00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 200
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3702
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.054 The recommended git tool is: git
00:00:00.055 using credential 00000000-0000-0000-0000-000000000002
00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.071 Fetching changes from the remote Git repository
00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.131 > git --version # 'git version 2.39.2'
00:00:00.131 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.391 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.405 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.418 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.418 > git config core.sparsecheckout # timeout=10
00:00:03.431 > git read-tree -mu HEAD # timeout=10
00:00:03.454 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.479 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.479 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.713 [Pipeline] Start of Pipeline
00:00:03.727 [Pipeline] library
00:00:03.728 Loading library shm_lib@master
00:00:03.728 Library shm_lib@master is cached. Copying from home.
00:00:03.744 [Pipeline] node
00:00:03.757 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.759 [Pipeline] {
00:00:03.767 [Pipeline] catchError
00:00:03.768 [Pipeline] {
00:00:03.778 [Pipeline] wrap
00:00:03.784 [Pipeline] {
00:00:03.797 [Pipeline] stage
00:00:03.798 [Pipeline] { (Prologue)
00:00:03.813 [Pipeline] echo
00:00:03.815 Node: VM-host-WFP7
00:00:03.819 [Pipeline] cleanWs
00:00:03.827 [WS-CLEANUP] Deleting project workspace...
00:00:03.827 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.833 [WS-CLEANUP] done
00:00:03.997 [Pipeline] setCustomBuildProperty
00:00:04.081 [Pipeline] httpRequest
00:00:04.399 [Pipeline] echo
00:00:04.401 Sorcerer 10.211.164.20 is alive
00:00:04.412 [Pipeline] retry
00:00:04.414 [Pipeline] {
00:00:04.429 [Pipeline] httpRequest
00:00:04.434 HttpMethod: GET
00:00:04.434 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.435 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.436 Response Code: HTTP/1.1 200 OK
00:00:04.436 Success: Status code 200 is in the accepted range: 200,404
00:00:04.436 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.582 [Pipeline] }
00:00:04.598 [Pipeline] // retry
00:00:04.605 [Pipeline] sh
00:00:04.893 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.907 [Pipeline] httpRequest
00:00:05.235 [Pipeline] echo
00:00:05.237 Sorcerer 10.211.164.20 is alive
00:00:05.244 [Pipeline] retry
00:00:05.245 [Pipeline] {
00:00:05.254 [Pipeline] httpRequest
00:00:05.258 HttpMethod: GET
00:00:05.259 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.259 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.260 Response Code: HTTP/1.1 200 OK
00:00:05.260 Success: Status code 200 is in the accepted range: 200,404
00:00:05.261 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:21.158 [Pipeline] }
00:00:21.176 [Pipeline] // retry
00:00:21.184 [Pipeline] sh
00:00:21.470 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:24.027 [Pipeline] sh
00:00:24.314 + git -C spdk log --oneline -n5
00:00:24.314 b18e1bd62 version: v24.09.1-pre
00:00:24.314 19524ad45 version: v24.09
00:00:24.314 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:24.314 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:24.314 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:24.337 [Pipeline] withCredentials
00:00:24.349 > git --version # timeout=10
00:00:24.362 > git --version # 'git version 2.39.2'
00:00:24.381 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:24.384 [Pipeline] {
00:00:24.395 [Pipeline] retry
00:00:24.397 [Pipeline] {
00:00:24.413 [Pipeline] sh
00:00:24.700 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:24.975 [Pipeline] }
00:00:24.994 [Pipeline] // retry
00:00:25.000 [Pipeline] }
00:00:25.017 [Pipeline] // withCredentials
00:00:25.028 [Pipeline] httpRequest
00:00:25.348 [Pipeline] echo
00:00:25.351 Sorcerer 10.211.164.20 is alive
00:00:25.362 [Pipeline] retry
00:00:25.364 [Pipeline] {
00:00:25.379 [Pipeline] httpRequest
00:00:25.385 HttpMethod: GET
00:00:25.386 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:25.386 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:25.398 Response Code: HTTP/1.1 200 OK
00:00:25.399 Success: Status code 200 is in the accepted range: 200,404
00:00:25.400 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:48.258 [Pipeline] }
00:00:48.268 [Pipeline] // retry
00:00:48.273 [Pipeline] sh
00:00:48.551 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:49.950 [Pipeline] sh
00:00:50.243 + git -C dpdk log --oneline -n5
00:00:50.243 caf0f5d395 version: 22.11.4
00:00:50.243 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:50.243 dc9c799c7d vhost: fix missing spinlock unlock
00:00:50.243 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:50.243 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:50.264 [Pipeline] writeFile
00:00:50.281 [Pipeline] sh
00:00:50.570 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:50.587 [Pipeline] sh
00:00:50.896 + cat autorun-spdk.conf
00:00:50.896 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.896 SPDK_RUN_ASAN=1
00:00:50.896 SPDK_RUN_UBSAN=1
00:00:50.896 SPDK_TEST_RAID=1
00:00:50.896 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:50.896 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:50.896 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:50.904 RUN_NIGHTLY=1
00:00:50.907 [Pipeline] }
00:00:50.922 [Pipeline] // stage
00:00:50.939 [Pipeline] stage
00:00:50.942 [Pipeline] { (Run VM)
00:00:50.955 [Pipeline] sh
00:00:51.242 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:51.242 + echo 'Start stage prepare_nvme.sh'
00:00:51.242 Start stage prepare_nvme.sh
00:00:51.242 + [[ -n 4 ]]
00:00:51.242 + disk_prefix=ex4
00:00:51.242 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:51.242 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:51.242 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:51.242 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.242 ++ SPDK_RUN_ASAN=1
00:00:51.242 ++ SPDK_RUN_UBSAN=1
00:00:51.242 ++ SPDK_TEST_RAID=1
00:00:51.242 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:51.242 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:51.242 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:51.242 ++ RUN_NIGHTLY=1
00:00:51.242 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:51.242 + nvme_files=()
00:00:51.242 + declare -A nvme_files
00:00:51.242 + backend_dir=/var/lib/libvirt/images/backends
00:00:51.242 + nvme_files['nvme.img']=5G
00:00:51.242 + nvme_files['nvme-cmb.img']=5G
00:00:51.242 + nvme_files['nvme-multi0.img']=4G
00:00:51.242 + nvme_files['nvme-multi1.img']=4G
00:00:51.242 + nvme_files['nvme-multi2.img']=4G
00:00:51.242 + nvme_files['nvme-openstack.img']=8G
00:00:51.242 + nvme_files['nvme-zns.img']=5G
00:00:51.242 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:51.242 + (( SPDK_TEST_FTL == 1 ))
00:00:51.242 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:51.242 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:51.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:51.242 + for nvme in "${!nvme_files[@]}"
00:00:51.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:51.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:51.503 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:51.503 + echo 'End stage prepare_nvme.sh'
00:00:51.503 End stage prepare_nvme.sh
00:00:51.518 [Pipeline] sh
00:00:51.811 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:51.811 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:00:51.811
00:00:51.811 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:51.811 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:51.811 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:51.811 HELP=0
00:00:51.811 DRY_RUN=0
00:00:51.811 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:00:51.811 NVME_DISKS_TYPE=nvme,nvme,
00:00:51.811 NVME_AUTO_CREATE=0
00:00:51.811 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:00:51.811 NVME_CMB=,,
00:00:51.811 NVME_PMR=,,
00:00:51.811 NVME_ZNS=,,
00:00:51.811 NVME_MS=,,
00:00:51.811 NVME_FDP=,,
00:00:51.811 SPDK_VAGRANT_DISTRO=fedora39
00:00:51.811 SPDK_VAGRANT_VMCPU=10
00:00:51.811 SPDK_VAGRANT_VMRAM=12288
00:00:51.811 SPDK_VAGRANT_PROVIDER=libvirt
00:00:51.811 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:51.811 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:51.811 SPDK_OPENSTACK_NETWORK=0
00:00:51.811 VAGRANT_PACKAGE_BOX=0
00:00:51.811 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:51.811 FORCE_DISTRO=true
00:00:51.811 VAGRANT_BOX_VERSION=
00:00:51.811 EXTRA_VAGRANTFILES=
00:00:51.811 NIC_MODEL=virtio
00:00:51.811
00:00:51.811 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:51.811 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:53.721 Bringing machine 'default' up with 'libvirt' provider...
00:00:53.980 ==> default: Creating image (snapshot of base box volume).
00:00:54.239 ==> default: Creating domain with the following settings...
00:00:54.239 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733485431_fdf62a8144354f05d94e
00:00:54.239 ==> default: -- Domain type: kvm
00:00:54.239 ==> default: -- Cpus: 10
00:00:54.239 ==> default: -- Feature: acpi
00:00:54.239 ==> default: -- Feature: apic
00:00:54.239 ==> default: -- Feature: pae
00:00:54.239 ==> default: -- Memory: 12288M
00:00:54.239 ==> default: -- Memory Backing: hugepages:
00:00:54.239 ==> default: -- Management MAC:
00:00:54.239 ==> default: -- Loader:
00:00:54.239 ==> default: -- Nvram:
00:00:54.239 ==> default: -- Base box: spdk/fedora39
00:00:54.239 ==> default: -- Storage pool: default
00:00:54.239 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733485431_fdf62a8144354f05d94e.img (20G)
00:00:54.239 ==> default: -- Volume Cache: default
00:00:54.239 ==> default: -- Kernel:
00:00:54.239 ==> default: -- Initrd:
00:00:54.239 ==> default: -- Graphics Type: vnc
00:00:54.239 ==> default: -- Graphics Port: -1
00:00:54.239 ==> default: -- Graphics IP: 127.0.0.1
00:00:54.240 ==> default: -- Graphics Password: Not defined
00:00:54.240 ==> default: -- Video Type: cirrus
00:00:54.240 ==> default: -- Video VRAM: 9216
00:00:54.240 ==> default: -- Sound Type:
00:00:54.240 ==> default: -- Keymap: en-us
00:00:54.240 ==> default: -- TPM Path:
00:00:54.240 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:54.240 ==> default: -- Command line args:
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:54.240 ==> default: -> value=-drive,
00:00:54.240 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:54.240 ==> default: -> value=-drive,
00:00:54.240 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.240 ==> default: -> value=-drive,
00:00:54.240 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.240 ==> default: -> value=-drive,
00:00:54.240 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:54.240 ==> default: -> value=-device,
00:00:54.240 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.240 ==> default: Creating shared folders metadata...
00:00:54.240 ==> default: Starting domain.
00:00:56.147 ==> default: Waiting for domain to get an IP address...
00:01:14.263 ==> default: Waiting for SSH to become available...
00:01:14.264 ==> default: Configuring and enabling network interfaces...
00:01:19.551 default: SSH address: 192.168.121.73:22
00:01:19.551 default: SSH username: vagrant
00:01:19.551 default: SSH auth method: private key
00:01:21.464 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:29.697 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:36.275 ==> default: Mounting SSHFS shared folder...
00:01:37.659 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:37.659 ==> default: Checking Mount..
00:01:39.570 ==> default: Folder Successfully Mounted!
00:01:39.570 ==> default: Running provisioner: file...
00:01:40.509 default: ~/.gitconfig => .gitconfig
00:01:41.078
00:01:41.078 SUCCESS!
00:01:41.078
00:01:41.078 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:41.078 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:41.078 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:41.078
00:01:41.088 [Pipeline] }
00:01:41.103 [Pipeline] // stage
00:01:41.112 [Pipeline] dir
00:01:41.113 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:41.115 [Pipeline] {
00:01:41.128 [Pipeline] catchError
00:01:41.129 [Pipeline] {
00:01:41.142 [Pipeline] sh
00:01:41.426 + vagrant ssh-config --host vagrant
00:01:41.426 + sed -ne /^Host/,$p
00:01:41.426 + tee ssh_conf
00:01:43.964 Host vagrant
00:01:43.964 HostName 192.168.121.73
00:01:43.964 User vagrant
00:01:43.964 Port 22
00:01:43.964 UserKnownHostsFile /dev/null
00:01:43.964 StrictHostKeyChecking no
00:01:43.964 PasswordAuthentication no
00:01:43.964 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:43.964 IdentitiesOnly yes
00:01:43.964 LogLevel FATAL
00:01:43.964 ForwardAgent yes
00:01:43.964 ForwardX11 yes
00:01:43.964
00:01:43.978 [Pipeline] withEnv
00:01:43.980 [Pipeline] {
00:01:43.992 [Pipeline] sh
00:01:44.275 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:44.275 source /etc/os-release
00:01:44.275 [[ -e /image.version ]] && img=$(< /image.version)
00:01:44.275 # Minimal, systemd-like check.
00:01:44.275 if [[ -e /.dockerenv ]]; then
00:01:44.275 # Clear garbage from the node's name:
00:01:44.275 # agt-er_autotest_547-896 -> autotest_547-896
00:01:44.275 # $HOSTNAME is the actual container id
00:01:44.275 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:44.275 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:44.275 # We can assume this is a mount from a host where container is running,
00:01:44.275 # so fetch its hostname to easily identify the target swarm worker.
00:01:44.275 container="$(< /etc/hostname) ($agent)"
00:01:44.275 else
00:01:44.275 # Fallback
00:01:44.275 container=$agent
00:01:44.275 fi
00:01:44.275 fi
00:01:44.275 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:44.275
00:01:44.548 [Pipeline] }
00:01:44.564 [Pipeline] // withEnv
00:01:44.573 [Pipeline] setCustomBuildProperty
00:01:44.588 [Pipeline] stage
00:01:44.590 [Pipeline] { (Tests)
00:01:44.607 [Pipeline] sh
00:01:44.890 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:45.171 [Pipeline] sh
00:01:45.471 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:45.748 [Pipeline] timeout
00:01:45.748 Timeout set to expire in 1 hr 30 min
00:01:45.750 [Pipeline] {
00:01:45.764 [Pipeline] sh
00:01:46.050 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:46.620 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:01:46.633 [Pipeline] sh
00:01:46.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:47.187 [Pipeline] sh
00:01:47.473 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:47.745 [Pipeline] sh
00:01:48.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:48.283 ++ readlink -f spdk_repo
00:01:48.283 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:48.283 + [[ -n /home/vagrant/spdk_repo ]]
00:01:48.283 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:48.283 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:48.283 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:48.283 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:48.283 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:48.283 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:48.283 + cd /home/vagrant/spdk_repo
00:01:48.283 + source /etc/os-release
00:01:48.283 ++ NAME='Fedora Linux'
00:01:48.283 ++ VERSION='39 (Cloud Edition)'
00:01:48.283 ++ ID=fedora
00:01:48.283 ++ VERSION_ID=39
00:01:48.283 ++ VERSION_CODENAME=
00:01:48.283 ++ PLATFORM_ID=platform:f39
00:01:48.283 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:48.283 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:48.283 ++ LOGO=fedora-logo-icon
00:01:48.283 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:48.283 ++ HOME_URL=https://fedoraproject.org/
00:01:48.283 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:48.284 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:48.284 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:48.284 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:48.284 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:48.284 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:48.284 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:48.284 ++ SUPPORT_END=2024-11-12
00:01:48.284 ++ VARIANT='Cloud Edition'
00:01:48.284 ++ VARIANT_ID=cloud
00:01:48.284 + uname -a
00:01:48.284 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:48.284 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:48.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:48.881 Hugepages
00:01:48.881 node hugesize free / total
00:01:48.881 node0 1048576kB 0 / 0
00:01:48.881 node0 2048kB 0 / 0
00:01:48.881
00:01:48.881 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:48.881 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:48.881 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:48.881 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:48.881 + rm -f /tmp/spdk-ld-path
00:01:48.881 + source autorun-spdk.conf
00:01:48.881 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.881 ++ SPDK_RUN_ASAN=1
00:01:48.881 ++ SPDK_RUN_UBSAN=1
00:01:48.881 ++ SPDK_TEST_RAID=1
00:01:48.881 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:48.881 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:48.881 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.881 ++ RUN_NIGHTLY=1
00:01:48.881 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:48.881 + [[ -n '' ]]
00:01:48.881 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:48.881 + for M in /var/spdk/build-*-manifest.txt
00:01:48.881 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:48.881 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.881 + for M in /var/spdk/build-*-manifest.txt
00:01:48.881 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:48.881 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.881 + for M in /var/spdk/build-*-manifest.txt
00:01:48.881 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:48.881 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.881 ++ uname
00:01:48.881 + [[ Linux == \L\i\n\u\x ]]
00:01:48.881 + sudo dmesg -T
00:01:49.139 + sudo dmesg --clear
00:01:49.139 + dmesg_pid=6159
00:01:49.139 + [[ Fedora Linux == FreeBSD ]]
00:01:49.139 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.139 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.139 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:49.139 + sudo dmesg -Tw
00:01:49.139 + [[ -x /usr/src/fio-static/fio ]]
00:01:49.139 + export FIO_BIN=/usr/src/fio-static/fio
00:01:49.139 + FIO_BIN=/usr/src/fio-static/fio
00:01:49.139 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:49.139 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:49.139 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:49.139 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.139 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.139 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:49.139 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.139 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.139 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.139 Test configuration:
00:01:49.139 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.139 SPDK_RUN_ASAN=1
00:01:49.139 SPDK_RUN_UBSAN=1
00:01:49.139 SPDK_TEST_RAID=1
00:01:49.139 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:49.139 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:49.139 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.139 RUN_NIGHTLY=1
00:01:49.139 11:44:46 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:49.139 11:44:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:49.139 11:44:46 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:49.139 11:44:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:49.139 11:44:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:49.139 11:44:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:49.139 11:44:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.139 11:44:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.139 11:44:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.139 11:44:46 -- paths/export.sh@5 -- $ export PATH
00:01:49.139 11:44:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.139 11:44:46 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:49.139 11:44:46 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:49.139 11:44:46 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733485486.XXXXXX
00:01:49.139 11:44:46 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733485486.ImfSMk
00:01:49.139 11:44:46 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:49.139 11:44:46 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:01:49.139 11:44:46 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:49.139 11:44:46 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:49.139 11:44:46 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:49.139 11:44:46 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:49.139 11:44:46 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:49.140 11:44:46 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:49.140 11:44:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.398 11:44:46 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:49.398 11:44:46 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:49.398 11:44:46 -- pm/common@17 -- $ local monitor
00:01:49.398 11:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.398 11:44:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.398 11:44:46 -- pm/common@25 -- $ sleep 1
00:01:49.398 11:44:46 -- pm/common@21 -- $ date +%s
00:01:49.398 11:44:46 -- pm/common@21 -- $ date +%s
00:01:49.398 11:44:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733485486
00:01:49.398 11:44:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733485486
00:01:49.398 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733485486_collect-vmstat.pm.log
00:01:49.398 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733485486_collect-cpu-load.pm.log
00:01:50.337 11:44:47 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:50.337 11:44:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:50.337 11:44:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:50.337 11:44:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:50.337 11:44:47 -- spdk/autobuild.sh@16 -- $ date -u
00:01:50.337 Fri Dec 6 11:44:47 AM UTC 2024
00:01:50.337 11:44:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:50.337 v24.09-1-gb18e1bd62
00:01:50.337 11:44:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:50.337 11:44:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:50.337 11:44:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:50.337 11:44:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:50.337 11:44:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.337 ************************************
00:01:50.337 START TEST asan
00:01:50.337 ************************************
00:01:50.337 using asan
00:01:50.337 11:44:47 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:50.337
00:01:50.337 real 0m0.001s
00:01:50.337 user 0m0.000s
00:01:50.337 sys 0m0.000s
00:01:50.337 11:44:47 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:50.337 11:44:47 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.337 ************************************
00:01:50.337 END TEST asan
00:01:50.337 ************************************
00:01:50.337 11:44:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:50.337 11:44:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:50.337 11:44:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:50.337 11:44:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:50.337 11:44:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.337 ************************************
00:01:50.337 START TEST ubsan
00:01:50.337 ************************************
00:01:50.337 using ubsan
00:01:50.337 11:44:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:50.337
00:01:50.338 real 0m0.001s
00:01:50.338 user 0m0.001s
00:01:50.338 sys 0m0.000s
00:01:50.338 11:44:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:50.338 11:44:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.338 ************************************
00:01:50.338 END TEST ubsan
00:01:50.338 ************************************
00:01:50.338 11:44:48 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:50.338 11:44:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:50.338 11:44:48 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:50.338 11:44:48 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:50.338 11:44:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:50.338 11:44:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.598 ************************************
00:01:50.598 START TEST build_native_dpdk
00:01:50.598 ************************************
00:01:50.598 11:44:48 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:50.598 11:44:48 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:50.599 caf0f5d395 version: 22.11.4 00:01:50.599 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:50.599 dc9c799c7d vhost: fix missing spinlock unlock 00:01:50.599 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:50.599 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:50.599 11:44:48 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:50.599 patching file config/rte_config.h 00:01:50.599 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:50.599 patching file lib/pcapng/rte_pcapng.c 00:01:50.599 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:50.599 11:44:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:50.599 11:44:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:55.885 The Meson build system 00:01:55.885 Version: 1.5.0 00:01:55.885 
Source dir: /home/vagrant/spdk_repo/dpdk 00:01:55.885 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:55.885 Build type: native build 00:01:55.885 Program cat found: YES (/usr/bin/cat) 00:01:55.885 Project name: DPDK 00:01:55.885 Project version: 22.11.4 00:01:55.885 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:55.885 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:55.885 Host machine cpu family: x86_64 00:01:55.885 Host machine cpu: x86_64 00:01:55.885 Message: ## Building in Developer Mode ## 00:01:55.885 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:55.885 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:55.885 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:55.885 Program objdump found: YES (/usr/bin/objdump) 00:01:55.885 Program python3 found: YES (/usr/bin/python3) 00:01:55.885 Program cat found: YES (/usr/bin/cat) 00:01:55.885 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:55.885 Checking for size of "void *" : 8 00:01:55.885 Checking for size of "void *" : 8 (cached) 00:01:55.885 Library m found: YES 00:01:55.885 Library numa found: YES 00:01:55.885 Has header "numaif.h" : YES 00:01:55.885 Library fdt found: NO 00:01:55.885 Library execinfo found: NO 00:01:55.885 Has header "execinfo.h" : YES 00:01:55.885 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:55.885 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:55.885 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:55.885 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:55.885 Run-time dependency openssl found: YES 3.1.1 00:01:55.885 Run-time dependency libpcap found: YES 1.10.4 00:01:55.885 Has header "pcap.h" with dependency libpcap: YES 00:01:55.885 Compiler for C supports arguments -Wcast-qual: YES 00:01:55.885 Compiler for C supports arguments -Wdeprecated: YES 00:01:55.885 Compiler for C supports arguments -Wformat: YES 00:01:55.885 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:55.885 Compiler for C supports arguments -Wformat-security: NO 00:01:55.885 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.885 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:55.885 Compiler for C supports arguments -Wnested-externs: YES 00:01:55.885 Compiler for C supports arguments -Wold-style-definition: YES 00:01:55.885 Compiler for C supports arguments -Wpointer-arith: YES 00:01:55.885 Compiler for C supports arguments -Wsign-compare: YES 00:01:55.885 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:55.885 Compiler for C supports arguments -Wundef: YES 00:01:55.885 Compiler for C supports arguments -Wwrite-strings: YES 00:01:55.885 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:55.885 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:55.885 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.885 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:55.885 Compiler for C supports arguments -mavx512f: YES 00:01:55.885 Checking if "AVX512 checking" compiles: YES 00:01:55.885 Fetching value of define "__SSE4_2__" : 1 00:01:55.885 Fetching value of define "__AES__" : 1 00:01:55.885 Fetching value of define "__AVX__" : 1 00:01:55.885 Fetching value of define "__AVX2__" : 1 00:01:55.885 Fetching value of define "__AVX512BW__" : 1 00:01:55.885 Fetching value of define "__AVX512CD__" : 1 00:01:55.885 Fetching value of define "__AVX512DQ__" : 1 00:01:55.885 Fetching value of define "__AVX512F__" : 1 00:01:55.885 Fetching value of define "__AVX512VL__" : 1 00:01:55.885 Fetching value of define "__PCLMUL__" : 1 00:01:55.885 Fetching value of define "__RDRND__" : 1 00:01:55.885 Fetching value of define "__RDSEED__" : 1 00:01:55.885 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:55.885 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:55.885 Message: lib/kvargs: Defining dependency "kvargs" 00:01:55.885 Message: lib/telemetry: Defining dependency "telemetry" 00:01:55.885 Checking for function "getentropy" : YES 00:01:55.885 Message: lib/eal: Defining dependency "eal" 00:01:55.885 Message: lib/ring: Defining dependency "ring" 00:01:55.885 Message: lib/rcu: Defining dependency "rcu" 00:01:55.885 Message: lib/mempool: Defining dependency "mempool" 00:01:55.885 Message: lib/mbuf: Defining dependency "mbuf" 00:01:55.885 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:55.885 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:55.885 Compiler for C supports arguments -mpclmul: YES 00:01:55.885 Compiler for C supports arguments -maes: YES 
00:01:55.885 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:55.885 Compiler for C supports arguments -mavx512bw: YES 00:01:55.885 Compiler for C supports arguments -mavx512dq: YES 00:01:55.885 Compiler for C supports arguments -mavx512vl: YES 00:01:55.885 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:55.885 Compiler for C supports arguments -mavx2: YES 00:01:55.885 Compiler for C supports arguments -mavx: YES 00:01:55.885 Message: lib/net: Defining dependency "net" 00:01:55.885 Message: lib/meter: Defining dependency "meter" 00:01:55.885 Message: lib/ethdev: Defining dependency "ethdev" 00:01:55.885 Message: lib/pci: Defining dependency "pci" 00:01:55.885 Message: lib/cmdline: Defining dependency "cmdline" 00:01:55.885 Message: lib/metrics: Defining dependency "metrics" 00:01:55.885 Message: lib/hash: Defining dependency "hash" 00:01:55.885 Message: lib/timer: Defining dependency "timer" 00:01:55.885 Fetching value of define "__AVX2__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.885 Message: lib/acl: Defining dependency "acl" 00:01:55.885 Message: lib/bbdev: Defining dependency "bbdev" 00:01:55.885 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:55.885 Run-time dependency libelf found: YES 0.191 00:01:55.885 Message: lib/bpf: Defining dependency "bpf" 00:01:55.885 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:55.885 Message: lib/compressdev: Defining dependency "compressdev" 00:01:55.885 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:55.885 Message: lib/distributor: Defining dependency "distributor" 00:01:55.885 Message: lib/efd: Defining dependency "efd" 00:01:55.885 Message: lib/eventdev: Defining dependency "eventdev" 00:01:55.885 Message: lib/gpudev: 
Defining dependency "gpudev" 00:01:55.885 Message: lib/gro: Defining dependency "gro" 00:01:55.885 Message: lib/gso: Defining dependency "gso" 00:01:55.885 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:55.885 Message: lib/jobstats: Defining dependency "jobstats" 00:01:55.885 Message: lib/latencystats: Defining dependency "latencystats" 00:01:55.885 Message: lib/lpm: Defining dependency "lpm" 00:01:55.885 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:55.885 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:55.885 Message: lib/member: Defining dependency "member" 00:01:55.885 Message: lib/pcapng: Defining dependency "pcapng" 00:01:55.885 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:55.885 Message: lib/power: Defining dependency "power" 00:01:55.885 Message: lib/rawdev: Defining dependency "rawdev" 00:01:55.885 Message: lib/regexdev: Defining dependency "regexdev" 00:01:55.885 Message: lib/dmadev: Defining dependency "dmadev" 00:01:55.885 Message: lib/rib: Defining dependency "rib" 00:01:55.885 Message: lib/reorder: Defining dependency "reorder" 00:01:55.885 Message: lib/sched: Defining dependency "sched" 00:01:55.885 Message: lib/security: Defining dependency "security" 00:01:55.885 Message: lib/stack: Defining dependency "stack" 00:01:55.885 Has header "linux/userfaultfd.h" : YES 00:01:55.885 Message: lib/vhost: Defining dependency "vhost" 00:01:55.885 Message: lib/ipsec: Defining dependency "ipsec" 00:01:55.885 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.885 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.885 Message: lib/fib: Defining dependency "fib" 00:01:55.885 Message: lib/port: Defining dependency "port" 00:01:55.885 Message: lib/pdump: Defining dependency "pdump" 
00:01:55.885 Message: lib/table: Defining dependency "table" 00:01:55.886 Message: lib/pipeline: Defining dependency "pipeline" 00:01:55.886 Message: lib/graph: Defining dependency "graph" 00:01:55.886 Message: lib/node: Defining dependency "node" 00:01:55.886 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:55.886 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:55.886 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:55.886 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.886 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:55.886 Compiler for C supports arguments -Wno-unused-value: YES 00:01:55.886 Compiler for C supports arguments -Wno-format: YES 00:01:55.886 Compiler for C supports arguments -Wno-format-security: YES 00:01:55.886 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:55.886 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:57.269 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:57.269 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:57.270 Fetching value of define "__AVX2__" : 1 (cached) 00:01:57.270 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.270 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.270 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.270 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:57.270 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:57.270 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:57.270 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.270 Configuring doxy-api.conf using configuration 00:01:57.270 Program sphinx-build found: NO 00:01:57.270 Configuring rte_build_config.h using configuration 00:01:57.270 Message: 00:01:57.270 ================= 00:01:57.270 Applications Enabled 00:01:57.270 ================= 00:01:57.270 00:01:57.270 apps: 
00:01:57.270 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:57.270 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:57.270 test-security-perf, 00:01:57.270 00:01:57.270 Message: 00:01:57.270 ================= 00:01:57.270 Libraries Enabled 00:01:57.270 ================= 00:01:57.270 00:01:57.270 libs: 00:01:57.270 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:57.270 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:57.270 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:57.270 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:57.270 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:57.270 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:57.270 table, pipeline, graph, node, 00:01:57.270 00:01:57.270 Message: 00:01:57.270 =============== 00:01:57.270 Drivers Enabled 00:01:57.270 =============== 00:01:57.270 00:01:57.270 common: 00:01:57.270 00:01:57.270 bus: 00:01:57.270 pci, vdev, 00:01:57.270 mempool: 00:01:57.270 ring, 00:01:57.270 dma: 00:01:57.270 00:01:57.270 net: 00:01:57.270 i40e, 00:01:57.270 raw: 00:01:57.270 00:01:57.270 crypto: 00:01:57.270 00:01:57.270 compress: 00:01:57.270 00:01:57.270 regex: 00:01:57.270 00:01:57.270 vdpa: 00:01:57.270 00:01:57.270 event: 00:01:57.270 00:01:57.270 baseband: 00:01:57.270 00:01:57.270 gpu: 00:01:57.270 00:01:57.270 00:01:57.270 Message: 00:01:57.270 ================= 00:01:57.270 Content Skipped 00:01:57.270 ================= 00:01:57.270 00:01:57.270 apps: 00:01:57.270 00:01:57.270 libs: 00:01:57.270 kni: explicitly disabled via build config (deprecated lib) 00:01:57.270 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:57.270 00:01:57.270 drivers: 00:01:57.270 common/cpt: not in enabled drivers build config 00:01:57.270 common/dpaax: not in enabled drivers build 
config 00:01:57.270 common/iavf: not in enabled drivers build config 00:01:57.270 common/idpf: not in enabled drivers build config 00:01:57.270 common/mvep: not in enabled drivers build config 00:01:57.270 common/octeontx: not in enabled drivers build config 00:01:57.270 bus/auxiliary: not in enabled drivers build config 00:01:57.270 bus/dpaa: not in enabled drivers build config 00:01:57.270 bus/fslmc: not in enabled drivers build config 00:01:57.270 bus/ifpga: not in enabled drivers build config 00:01:57.270 bus/vmbus: not in enabled drivers build config 00:01:57.270 common/cnxk: not in enabled drivers build config 00:01:57.270 common/mlx5: not in enabled drivers build config 00:01:57.270 common/qat: not in enabled drivers build config 00:01:57.270 common/sfc_efx: not in enabled drivers build config 00:01:57.270 mempool/bucket: not in enabled drivers build config 00:01:57.270 mempool/cnxk: not in enabled drivers build config 00:01:57.270 mempool/dpaa: not in enabled drivers build config 00:01:57.270 mempool/dpaa2: not in enabled drivers build config 00:01:57.270 mempool/octeontx: not in enabled drivers build config 00:01:57.270 mempool/stack: not in enabled drivers build config 00:01:57.270 dma/cnxk: not in enabled drivers build config 00:01:57.270 dma/dpaa: not in enabled drivers build config 00:01:57.270 dma/dpaa2: not in enabled drivers build config 00:01:57.270 dma/hisilicon: not in enabled drivers build config 00:01:57.270 dma/idxd: not in enabled drivers build config 00:01:57.270 dma/ioat: not in enabled drivers build config 00:01:57.270 dma/skeleton: not in enabled drivers build config 00:01:57.270 net/af_packet: not in enabled drivers build config 00:01:57.270 net/af_xdp: not in enabled drivers build config 00:01:57.270 net/ark: not in enabled drivers build config 00:01:57.270 net/atlantic: not in enabled drivers build config 00:01:57.270 net/avp: not in enabled drivers build config 00:01:57.270 net/axgbe: not in enabled drivers build config 00:01:57.270 
net/bnx2x: not in enabled drivers build config 00:01:57.270 net/bnxt: not in enabled drivers build config 00:01:57.270 net/bonding: not in enabled drivers build config 00:01:57.270 net/cnxk: not in enabled drivers build config 00:01:57.270 net/cxgbe: not in enabled drivers build config 00:01:57.270 net/dpaa: not in enabled drivers build config 00:01:57.270 net/dpaa2: not in enabled drivers build config 00:01:57.270 net/e1000: not in enabled drivers build config 00:01:57.270 net/ena: not in enabled drivers build config 00:01:57.270 net/enetc: not in enabled drivers build config 00:01:57.270 net/enetfec: not in enabled drivers build config 00:01:57.270 net/enic: not in enabled drivers build config 00:01:57.270 net/failsafe: not in enabled drivers build config 00:01:57.270 net/fm10k: not in enabled drivers build config 00:01:57.270 net/gve: not in enabled drivers build config 00:01:57.270 net/hinic: not in enabled drivers build config 00:01:57.270 net/hns3: not in enabled drivers build config 00:01:57.270 net/iavf: not in enabled drivers build config 00:01:57.270 net/ice: not in enabled drivers build config 00:01:57.270 net/idpf: not in enabled drivers build config 00:01:57.270 net/igc: not in enabled drivers build config 00:01:57.270 net/ionic: not in enabled drivers build config 00:01:57.270 net/ipn3ke: not in enabled drivers build config 00:01:57.270 net/ixgbe: not in enabled drivers build config 00:01:57.270 net/kni: not in enabled drivers build config 00:01:57.270 net/liquidio: not in enabled drivers build config 00:01:57.270 net/mana: not in enabled drivers build config 00:01:57.270 net/memif: not in enabled drivers build config 00:01:57.270 net/mlx4: not in enabled drivers build config 00:01:57.270 net/mlx5: not in enabled drivers build config 00:01:57.270 net/mvneta: not in enabled drivers build config 00:01:57.270 net/mvpp2: not in enabled drivers build config 00:01:57.270 net/netvsc: not in enabled drivers build config 00:01:57.270 net/nfb: not in enabled 
drivers build config 00:01:57.270 net/nfp: not in enabled drivers build config 00:01:57.270 net/ngbe: not in enabled drivers build config 00:01:57.270 net/null: not in enabled drivers build config 00:01:57.270 net/octeontx: not in enabled drivers build config 00:01:57.270 net/octeon_ep: not in enabled drivers build config 00:01:57.270 net/pcap: not in enabled drivers build config 00:01:57.270 net/pfe: not in enabled drivers build config 00:01:57.270 net/qede: not in enabled drivers build config 00:01:57.270 net/ring: not in enabled drivers build config 00:01:57.270 net/sfc: not in enabled drivers build config 00:01:57.270 net/softnic: not in enabled drivers build config 00:01:57.270 net/tap: not in enabled drivers build config 00:01:57.270 net/thunderx: not in enabled drivers build config 00:01:57.270 net/txgbe: not in enabled drivers build config 00:01:57.270 net/vdev_netvsc: not in enabled drivers build config 00:01:57.270 net/vhost: not in enabled drivers build config 00:01:57.270 net/virtio: not in enabled drivers build config 00:01:57.270 net/vmxnet3: not in enabled drivers build config 00:01:57.270 raw/cnxk_bphy: not in enabled drivers build config 00:01:57.270 raw/cnxk_gpio: not in enabled drivers build config 00:01:57.270 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:57.270 raw/ifpga: not in enabled drivers build config 00:01:57.270 raw/ntb: not in enabled drivers build config 00:01:57.270 raw/skeleton: not in enabled drivers build config 00:01:57.270 crypto/armv8: not in enabled drivers build config 00:01:57.270 crypto/bcmfs: not in enabled drivers build config 00:01:57.270 crypto/caam_jr: not in enabled drivers build config 00:01:57.270 crypto/ccp: not in enabled drivers build config 00:01:57.270 crypto/cnxk: not in enabled drivers build config 00:01:57.270 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.270 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.270 crypto/ipsec_mb: not in enabled drivers build config 
00:01:57.270 crypto/mlx5: not in enabled drivers build config 00:01:57.270 crypto/mvsam: not in enabled drivers build config 00:01:57.270 crypto/nitrox: not in enabled drivers build config 00:01:57.270 crypto/null: not in enabled drivers build config 00:01:57.270 crypto/octeontx: not in enabled drivers build config 00:01:57.270 crypto/openssl: not in enabled drivers build config 00:01:57.270 crypto/scheduler: not in enabled drivers build config 00:01:57.270 crypto/uadk: not in enabled drivers build config 00:01:57.270 crypto/virtio: not in enabled drivers build config 00:01:57.270 compress/isal: not in enabled drivers build config 00:01:57.270 compress/mlx5: not in enabled drivers build config 00:01:57.270 compress/octeontx: not in enabled drivers build config 00:01:57.270 compress/zlib: not in enabled drivers build config 00:01:57.270 regex/mlx5: not in enabled drivers build config 00:01:57.270 regex/cn9k: not in enabled drivers build config 00:01:57.270 vdpa/ifc: not in enabled drivers build config 00:01:57.270 vdpa/mlx5: not in enabled drivers build config 00:01:57.270 vdpa/sfc: not in enabled drivers build config 00:01:57.270 event/cnxk: not in enabled drivers build config 00:01:57.270 event/dlb2: not in enabled drivers build config 00:01:57.270 event/dpaa: not in enabled drivers build config 00:01:57.270 event/dpaa2: not in enabled drivers build config 00:01:57.270 event/dsw: not in enabled drivers build config 00:01:57.270 event/opdl: not in enabled drivers build config 00:01:57.271 event/skeleton: not in enabled drivers build config 00:01:57.271 event/sw: not in enabled drivers build config 00:01:57.271 event/octeontx: not in enabled drivers build config 00:01:57.271 baseband/acc: not in enabled drivers build config 00:01:57.271 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:57.271 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:57.271 baseband/la12xx: not in enabled drivers build config 00:01:57.271 baseband/null: not in 
enabled drivers build config
00:01:57.271 baseband/turbo_sw: not in enabled drivers build config
00:01:57.271 gpu/cuda: not in enabled drivers build config
00:01:57.271 
00:01:57.271 
00:01:57.271 Build targets in project: 311
00:01:57.271 
00:01:57.271 DPDK 22.11.4
00:01:57.271 
00:01:57.271 User defined options
00:01:57.271 libdir : lib
00:01:57.271 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:57.271 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:57.271 c_link_args : 
00:01:57.271 enable_docs : false
00:01:57.271 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:57.271 enable_kmods : false
00:01:57.271 machine : native
00:01:57.271 tests : false
00:01:57.271 
00:01:57.271 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:57.271 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:57.271 11:44:54 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:57.271 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:57.530 [1/740] Generating lib/rte_kvargs_def with a custom command
00:01:57.530 [2/740] Generating lib/rte_kvargs_mingw with a custom command
00:01:57.530 [3/740] Generating lib/rte_telemetry_mingw with a custom command
00:01:57.530 [4/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:57.530 [5/740] Generating lib/rte_telemetry_def with a custom command
00:01:57.530 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:57.530 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:57.530 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:57.530 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:57.530 [10/740] Linking static target lib/librte_kvargs.a
00:01:57.530 [11/740] Compiling C object
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.530 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.530 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.530 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.530 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.530 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.530 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.530 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.789 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.789 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:57.789 [21/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.789 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.790 [23/740] Linking target lib/librte_kvargs.so.23.0 00:01:57.790 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.790 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.790 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.790 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.790 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.790 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.790 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.790 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.790 [32/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.050 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.050 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.050 [35/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.050 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.050 [37/740] Linking static target lib/librte_telemetry.a 00:01:58.050 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.050 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:58.050 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.050 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.050 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.309 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.310 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.310 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.310 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.310 [47/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.310 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.310 [49/740] Linking target lib/librte_telemetry.so.23.0 00:01:58.310 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.310 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.310 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.310 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.310 [54/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.310 [55/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:58.310 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.310 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.310 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.310 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.310 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.310 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.310 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.310 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.310 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.569 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.569 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:58.569 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.569 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.569 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.569 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.569 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.569 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.569 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.569 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.569 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.569 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:01:58.569 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.569 [78/740] Generating lib/rte_eal_def with a custom command 00:01:58.569 [79/740] Generating lib/rte_eal_mingw with a custom command 00:01:58.569 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.569 [81/740] Generating lib/rte_ring_def with a custom command 00:01:58.569 [82/740] Generating lib/rte_ring_mingw with a custom command 00:01:58.569 [83/740] Generating lib/rte_rcu_def with a custom command 00:01:58.569 [84/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.569 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:01:58.569 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.828 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.828 [88/740] Linking static target lib/librte_ring.a 00:01:58.828 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.828 [90/740] Generating lib/rte_mempool_def with a custom command 00:01:58.828 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:01:58.828 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.828 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.088 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.088 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.088 [96/740] Linking static target lib/librte_eal.a 00:01:59.088 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:59.088 [98/740] Generating lib/rte_mbuf_def with a custom command 00:01:59.088 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:59.088 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:59.088 [101/740] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:59.088 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:59.349 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:59.349 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.349 [105/740] Linking static target lib/librte_rcu.a 00:01:59.349 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:59.349 [107/740] Linking static target lib/librte_mempool.a 00:01:59.349 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:59.349 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:59.609 [110/740] Generating lib/rte_net_def with a custom command 00:01:59.609 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:59.609 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:59.609 [113/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:59.609 [114/740] Generating lib/rte_net_mingw with a custom command 00:01:59.609 [115/740] Generating lib/rte_meter_mingw with a custom command 00:01:59.609 [116/740] Generating lib/rte_meter_def with a custom command 00:01:59.609 [117/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:59.609 [118/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.609 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.609 [120/740] Linking static target lib/librte_meter.a 00:01:59.609 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:59.869 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.869 [123/740] Linking static target lib/librte_net.a 00:01:59.869 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.869 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.869 
[126/740] Linking static target lib/librte_mbuf.a 00:01:59.869 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.869 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.869 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.869 [130/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.869 [131/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.137 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.137 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.409 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.409 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.409 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.409 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:00.409 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:00.409 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.409 [140/740] Generating lib/rte_pci_def with a custom command 00:02:00.409 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.409 [142/740] Linking static target lib/librte_pci.a 00:02:00.409 [143/740] Generating lib/rte_pci_mingw with a custom command 00:02:00.669 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.669 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.669 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.669 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.669 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.669 
[149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.669 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.669 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.669 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.669 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.928 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.928 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.928 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.928 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:00.928 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.928 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:00.928 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:00.929 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.929 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:00.929 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.929 [164/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:00.929 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.929 [166/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.929 [167/740] Generating lib/rte_hash_def with a custom command 00:02:00.929 [168/740] Linking static target lib/librte_cmdline.a 00:02:00.929 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:00.929 [170/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.929 [171/740] Generating lib/rte_timer_def with a custom command 00:02:00.929 [172/740] 
Generating lib/rte_timer_mingw with a custom command 00:02:01.188 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:01.188 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:01.188 [175/740] Linking static target lib/librte_metrics.a 00:02:01.448 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.448 [177/740] Linking static target lib/librte_timer.a 00:02:01.448 [178/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:01.448 [179/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.709 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.709 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.709 [182/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:01.709 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.709 [184/740] Generating lib/rte_acl_def with a custom command 00:02:01.969 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:01.969 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:01.969 [187/740] Generating lib/rte_bbdev_def with a custom command 00:02:01.969 [188/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:01.969 [189/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:01.969 [190/740] Generating lib/rte_bitratestats_def with a custom command 00:02:01.969 [191/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:01.969 [192/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.969 [193/740] Linking static target lib/librte_ethdev.a 00:02:02.229 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:02.229 [195/740] Linking static target lib/librte_bitratestats.a 00:02:02.229 [196/740] Compiling C 
object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:02.229 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:02.489 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:02.489 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.489 [200/740] Linking static target lib/librte_bbdev.a 00:02:02.749 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:02.749 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:02.749 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:03.009 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.009 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:03.009 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.009 [207/740] Linking static target lib/librte_hash.a 00:02:03.009 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:03.267 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:03.267 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:03.526 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:03.526 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:03.526 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:03.526 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:03.526 [215/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.526 [216/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:03.526 [217/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:03.526 [218/740] Linking static target lib/librte_cfgfile.a 00:02:03.785 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:03.785 [220/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:03.785 [221/740] Generating lib/rte_compressdev_def with a custom command 00:02:03.785 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:03.785 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:03.786 [224/740] Linking static target lib/librte_bpf.a 00:02:03.786 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.786 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:04.045 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:04.045 [228/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:04.045 [229/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:04.045 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:04.045 [231/740] Linking static target lib/librte_acl.a 00:02:04.046 [232/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:04.046 [233/740] Linking static target lib/librte_compressdev.a 00:02:04.046 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:04.046 [235/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.046 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:04.046 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:04.306 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.306 [239/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:04.306 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:04.306 [241/740] Generating lib/rte_efd_def with a custom command 00:02:04.306 [242/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.306 [243/740] Compiling 
C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:04.306 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:04.306 [245/740] Linking target lib/librte_eal.so.23.0 00:02:04.566 [246/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:04.566 [247/740] Linking target lib/librte_ring.so.23.0 00:02:04.566 [248/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:04.566 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:04.566 [250/740] Linking target lib/librte_meter.so.23.0 00:02:04.566 [251/740] Linking target lib/librte_pci.so.23.0 00:02:04.566 [252/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:04.566 [253/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:04.826 [254/740] Linking target lib/librte_rcu.so.23.0 00:02:04.826 [255/740] Linking target lib/librte_timer.so.23.0 00:02:04.826 [256/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.826 [257/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:04.826 [258/740] Linking target lib/librte_mempool.so.23.0 00:02:04.826 [259/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:04.826 [260/740] Linking target lib/librte_cfgfile.so.23.0 00:02:04.826 [261/740] Linking target lib/librte_acl.so.23.0 00:02:04.826 [262/740] Linking static target lib/librte_distributor.a 00:02:04.826 [263/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:04.826 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:04.826 [265/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:04.826 [266/740] Linking target lib/librte_mbuf.so.23.0 
00:02:04.826 [267/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:05.086 [268/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:05.086 [269/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.086 [270/740] Linking target lib/librte_net.so.23.0 00:02:05.086 [271/740] Linking target lib/librte_bbdev.so.23.0 00:02:05.086 [272/740] Linking target lib/librte_compressdev.so.23.0 00:02:05.086 [273/740] Linking target lib/librte_distributor.so.23.0 00:02:05.086 [274/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:05.086 [275/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:05.086 [276/740] Linking target lib/librte_cmdline.so.23.0 00:02:05.086 [277/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:05.086 [278/740] Linking static target lib/librte_efd.a 00:02:05.086 [279/740] Generating lib/rte_eventdev_def with a custom command 00:02:05.086 [280/740] Linking target lib/librte_hash.so.23.0 00:02:05.346 [281/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:05.346 [282/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:05.346 [283/740] Generating lib/rte_gpudev_def with a custom command 00:02:05.346 [284/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:05.346 [285/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:05.346 [286/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.346 [287/740] Linking target lib/librte_efd.so.23.0 00:02:05.606 [288/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.606 [289/740] Linking target lib/librte_ethdev.so.23.0 00:02:05.606 [290/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:02:05.606 [291/740] Linking static target lib/librte_cryptodev.a 00:02:05.866 [292/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:05.866 [293/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:05.866 [294/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:05.866 [295/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:05.866 [296/740] Linking static target lib/librte_gpudev.a 00:02:05.866 [297/740] Linking target lib/librte_metrics.so.23.0 00:02:05.866 [298/740] Linking target lib/librte_bpf.so.23.0 00:02:05.866 [299/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:05.866 [300/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:05.866 [301/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:05.866 [302/740] Generating lib/rte_gro_def with a custom command 00:02:05.866 [303/740] Generating lib/rte_gro_mingw with a custom command 00:02:05.866 [304/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:05.866 [305/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:05.866 [306/740] Linking target lib/librte_bitratestats.so.23.0 00:02:06.126 [307/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:06.126 [308/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:06.387 [309/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:06.387 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:06.387 [311/740] Generating lib/rte_gso_def with a custom command 00:02:06.387 [312/740] Generating lib/rte_gso_mingw with a custom command 00:02:06.387 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:06.387 [314/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:06.387 [315/740] Linking static target 
lib/librte_gro.a 00:02:06.387 [316/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:06.387 [317/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.387 [318/740] Linking static target lib/librte_eventdev.a 00:02:06.387 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:06.387 [320/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:06.387 [321/740] Linking target lib/librte_gpudev.so.23.0 00:02:06.387 [322/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.647 [323/740] Linking target lib/librte_gro.so.23.0 00:02:06.647 [324/740] Generating lib/rte_ip_frag_def with a custom command 00:02:06.647 [325/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:06.647 [326/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:06.647 [327/740] Linking static target lib/librte_gso.a 00:02:06.647 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:06.647 [329/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:06.647 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:06.647 [331/740] Linking static target lib/librte_jobstats.a 00:02:06.647 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:06.647 [333/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.906 [334/740] Linking target lib/librte_gso.so.23.0 00:02:06.906 [335/740] Generating lib/rte_latencystats_def with a custom command 00:02:06.906 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:06.906 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:06.906 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:06.906 [339/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:06.906 [340/740] Generating lib/rte_lpm_def with a custom command 00:02:06.907 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:06.907 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.907 [343/740] Linking target lib/librte_jobstats.so.23.0 00:02:06.907 [344/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:06.907 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:06.907 [346/740] Linking static target lib/librte_ip_frag.a 00:02:07.167 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:07.167 [348/740] Linking static target lib/librte_latencystats.a 00:02:07.167 [349/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.428 [350/740] Linking target lib/librte_ip_frag.so.23.0 00:02:07.428 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:07.428 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:07.428 [353/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:07.428 [354/740] Generating lib/rte_member_def with a custom command 00:02:07.428 [355/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:07.428 [356/740] Generating lib/rte_member_mingw with a custom command 00:02:07.428 [357/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.428 [358/740] Generating lib/rte_pcapng_def with a custom command 00:02:07.428 [359/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:07.428 [360/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:07.428 [361/740] Linking target lib/librte_latencystats.so.23.0 00:02:07.428 [362/740] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:07.428 [363/740] Linking target lib/librte_cryptodev.so.23.0 00:02:07.428 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.428 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.687 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.687 [367/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:07.687 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:07.687 [369/740] Linking static target lib/librte_lpm.a 00:02:07.687 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.687 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:07.944 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:07.944 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:07.944 [374/740] Generating lib/rte_power_def with a custom command 00:02:07.944 [375/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.944 [376/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.944 [377/740] Generating lib/rte_power_mingw with a custom command 00:02:07.944 [378/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.944 [379/740] Linking target lib/librte_lpm.so.23.0 00:02:07.944 [380/740] Generating lib/rte_rawdev_def with a custom command 00:02:07.944 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:07.944 [382/740] Linking target lib/librte_eventdev.so.23.0 00:02:07.944 [383/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.944 [384/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:07.944 [385/740] Generating lib/rte_regexdev_def with a custom 
command 00:02:07.944 [386/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:07.944 [387/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:07.944 [388/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:07.944 [389/740] Linking static target lib/librte_pcapng.a 00:02:07.944 [390/740] Generating lib/rte_dmadev_def with a custom command 00:02:07.944 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:08.203 [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:08.203 [393/740] Linking static target lib/librte_rawdev.a 00:02:08.203 [394/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.203 [395/740] Generating lib/rte_rib_def with a custom command 00:02:08.203 [396/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.203 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:08.203 [398/740] Generating lib/rte_rib_mingw with a custom command 00:02:08.203 [399/740] Generating lib/rte_reorder_def with a custom command 00:02:08.203 [400/740] Linking target lib/librte_pcapng.so.23.0 00:02:08.203 [401/740] Generating lib/rte_reorder_mingw with a custom command 00:02:08.204 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.204 [403/740] Linking static target lib/librte_power.a 00:02:08.463 [404/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:08.463 [405/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.463 [406/740] Linking static target lib/librte_dmadev.a 00:02:08.463 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:08.463 [408/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:08.463 [409/740] Linking static target lib/librte_regexdev.a 00:02:08.463 [410/740] Generating 
lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.463 [411/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:08.463 [412/740] Linking static target lib/librte_member.a 00:02:08.463 [413/740] Linking target lib/librte_rawdev.so.23.0 00:02:08.722 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:08.722 [415/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:08.722 [416/740] Generating lib/rte_sched_def with a custom command 00:02:08.722 [417/740] Generating lib/rte_sched_mingw with a custom command 00:02:08.722 [418/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:08.722 [419/740] Generating lib/rte_security_def with a custom command 00:02:08.722 [420/740] Generating lib/rte_security_mingw with a custom command 00:02:08.722 [421/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.722 [422/740] Linking static target lib/librte_reorder.a 00:02:08.722 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:08.722 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:08.722 [425/740] Generating lib/rte_stack_def with a custom command 00:02:08.722 [426/740] Generating lib/rte_stack_mingw with a custom command 00:02:08.722 [427/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.722 [428/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:08.722 [429/740] Linking static target lib/librte_rib.a 00:02:08.722 [430/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.722 [431/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:08.722 [432/740] Linking target lib/librte_dmadev.so.23.0 00:02:08.982 [433/740] Linking static target lib/librte_stack.a 00:02:08.982 [434/740] Linking target lib/librte_member.so.23.0 00:02:08.982 [435/740] Compiling C 
object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:08.982 [436/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.982 [437/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:08.982 [438/740] Linking target lib/librte_reorder.so.23.0 00:02:08.982 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.982 [440/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.982 [441/740] Linking target lib/librte_stack.so.23.0 00:02:08.982 [442/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.982 [443/740] Linking target lib/librte_power.so.23.0 00:02:08.982 [444/740] Linking target lib/librte_regexdev.so.23.0 00:02:09.241 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.241 [446/740] Linking static target lib/librte_security.a 00:02:09.241 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.241 [448/740] Linking target lib/librte_rib.so.23.0 00:02:09.241 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:09.241 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.241 [451/740] Generating lib/rte_vhost_def with a custom command 00:02:09.241 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:09.501 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.501 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.501 [455/740] Linking target lib/librte_security.so.23.0 00:02:09.501 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.501 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:09.761 [458/740] Compiling C object 
lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:09.761 [459/740] Linking static target lib/librte_sched.a 00:02:09.761 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:10.020 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:10.020 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:10.020 [463/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.020 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:10.020 [465/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.020 [466/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:10.020 [467/740] Linking target lib/librte_sched.so.23.0 00:02:10.020 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.279 [469/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:10.279 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:10.279 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:10.279 [472/740] Generating lib/rte_fib_def with a custom command 00:02:10.279 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:10.279 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:10.537 [475/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:10.537 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:10.794 [477/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:10.794 [478/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:10.794 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:10.794 [480/740] Linking static target lib/librte_ipsec.a 00:02:10.794 [481/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:10.794 [482/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:11.052 [483/740] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:11.052 [484/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.052 [485/740] Linking static target lib/librte_fib.a 00:02:11.052 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:11.052 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:11.052 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:11.311 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:11.311 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.311 [491/740] Linking target lib/librte_fib.so.23.0 00:02:11.569 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:11.569 [493/740] Generating lib/rte_port_def with a custom command 00:02:11.569 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:11.569 [495/740] Generating lib/rte_port_mingw with a custom command 00:02:11.569 [496/740] Generating lib/rte_pdump_def with a custom command 00:02:11.569 [497/740] Generating lib/rte_pdump_mingw with a custom command 00:02:11.569 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:11.828 [499/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:11.828 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:11.828 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:11.828 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:11.828 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:12.088 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:12.088 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:12.088 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 
00:02:12.348 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:12.348 [508/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:12.348 [509/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:12.348 [510/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:12.348 [511/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:12.348 [512/740] Linking static target lib/librte_port.a 00:02:12.348 [513/740] Linking static target lib/librte_pdump.a 00:02:12.348 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:12.608 [515/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.608 [516/740] Linking target lib/librte_pdump.so.23.0 00:02:12.868 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.868 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:12.868 [519/740] Linking target lib/librte_port.so.23.0 00:02:12.868 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:12.868 [521/740] Generating lib/rte_table_def with a custom command 00:02:12.868 [522/740] Generating lib/rte_table_mingw with a custom command 00:02:12.868 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:12.868 [524/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:13.129 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:13.129 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:13.129 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:13.129 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:13.129 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:13.129 [530/740] Compiling 
C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:13.129 [531/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:13.129 [532/740] Linking static target lib/librte_table.a 00:02:13.389 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:13.649 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:13.649 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:13.649 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.649 [537/740] Linking target lib/librte_table.so.23.0 00:02:13.649 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:13.649 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:13.908 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:13.908 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:13.908 [542/740] Generating lib/rte_graph_def with a custom command 00:02:13.908 [543/740] Generating lib/rte_graph_mingw with a custom command 00:02:14.168 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:14.168 [545/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:14.168 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:14.168 [547/740] Linking static target lib/librte_graph.a 00:02:14.168 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:14.428 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:14.428 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:14.428 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:14.688 [552/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.688 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:14.688 [554/740] Generating 
lib/rte_node_def with a custom command 00:02:14.688 [555/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.688 [556/740] Generating lib/rte_node_mingw with a custom command 00:02:14.688 [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:14.688 [558/740] Linking target lib/librte_graph.so.23.0 00:02:14.688 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.948 [560/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:14.948 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:14.948 [562/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:14.948 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.948 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:14.948 [565/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.948 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:14.948 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.948 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:14.948 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.948 [570/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:14.948 [571/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:15.209 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:15.209 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:15.209 [574/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:15.209 [575/740] Linking static target lib/librte_node.a 00:02:15.209 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.209 [577/740] Compiling 
C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.209 [578/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.209 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.209 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.469 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.469 [582/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.469 [583/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.469 [584/740] Linking target lib/librte_node.so.23.0 00:02:15.469 [585/740] Linking static target drivers/librte_bus_vdev.a 00:02:15.469 [586/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.469 [587/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.469 [588/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.469 [589/740] Linking static target drivers/librte_bus_pci.a 00:02:15.469 [590/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.469 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.729 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:15.729 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:15.729 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:15.729 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:15.729 [596/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:15.729 [597/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.729 [598/740] Linking target 
drivers/librte_bus_pci.so.23.0 00:02:15.989 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.989 [600/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.989 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:15.989 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.989 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.989 [604/740] Linking static target drivers/librte_mempool_ring.a 00:02:15.989 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.989 [606/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:15.989 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:16.248 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:16.509 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:16.768 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:16.768 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:16.768 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:17.339 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:17.339 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:17.339 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:17.339 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:17.339 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:17.597 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:17.597 [619/740] Generating 
drivers/rte_net_i40e_def with a custom command 00:02:17.597 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:17.855 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:18.113 [622/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:18.372 [623/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:18.630 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:18.630 [625/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:18.630 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:18.630 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:18.889 [628/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:18.889 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:18.889 [630/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:18.889 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:18.889 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:19.147 [633/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:19.405 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:19.405 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:19.405 [636/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:19.405 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.663 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:19.663 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:19.663 [640/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 
00:02:19.663 [641/740] Linking static target drivers/librte_net_i40e.a 00:02:19.663 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:19.663 [643/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:19.921 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:19.921 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:19.921 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:19.921 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:20.179 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:20.179 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.179 [650/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:20.179 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:20.443 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:20.715 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:20.715 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:20.715 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:20.715 [656/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:20.715 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:20.715 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:20.715 [659/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:20.991 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:20.991 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:20.991 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:20.991 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:21.262 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:21.262 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:21.522 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:21.782 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:21.782 [668/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.782 [669/740] Linking static target lib/librte_vhost.a 00:02:21.782 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:21.782 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:22.042 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:22.302 [673/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:22.302 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:22.302 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:22.302 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:22.302 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:22.562 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:22.562 [679/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:22.562 
[680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:22.562 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:22.822 [682/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.822 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:22.822 [684/740] Linking target lib/librte_vhost.so.23.0 00:02:22.822 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:22.822 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:22.822 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:23.082 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:23.082 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:23.082 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:23.082 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:23.341 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:23.341 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:23.341 [694/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:23.602 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:23.602 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:23.861 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:24.122 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:24.122 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:24.122 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:24.122 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:24.382 [702/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:24.382 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:24.642 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:24.642 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:24.642 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:24.902 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:24.902 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:25.161 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:25.420 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:25.420 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:25.420 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:25.420 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:25.680 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:25.680 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:25.680 [716/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:25.680 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:25.680 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:26.249 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:28.155 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:28.155 [721/740] Linking static target lib/librte_pipeline.a 00:02:28.414 [722/740] Linking target app/dpdk-test-acl 00:02:28.414 [723/740] Linking target app/dpdk-pdump 00:02:28.414 [724/740] Linking target app/dpdk-test-bbdev 00:02:28.672 [725/740] Linking target app/dpdk-test-eventdev 00:02:28.672 [726/740] Linking target app/dpdk-test-compress-perf 00:02:28.672 
[727/740] Linking target app/dpdk-test-cmdline 00:02:28.672 [728/740] Linking target app/dpdk-dumpcap 00:02:28.672 [729/740] Linking target app/dpdk-test-crypto-perf 00:02:28.672 [730/740] Linking target app/dpdk-proc-info 00:02:28.931 [731/740] Linking target app/dpdk-test-fib 00:02:28.931 [732/740] Linking target app/dpdk-test-flow-perf 00:02:28.931 [733/740] Linking target app/dpdk-test-pipeline 00:02:28.931 [734/740] Linking target app/dpdk-test-gpudev 00:02:28.931 [735/740] Linking target app/dpdk-testpmd 00:02:28.931 [736/740] Linking target app/dpdk-test-security-perf 00:02:28.931 [737/740] Linking target app/dpdk-test-regex 00:02:28.931 [738/740] Linking target app/dpdk-test-sad 00:02:33.131 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.131 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:33.131 11:45:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:33.131 11:45:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:33.131 11:45:30 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:33.391 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:33.391 [0/1] Installing files. 
00:02:33.652 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:33.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:33.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:33.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:33.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:33.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:33.655 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.655 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.655 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.656 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:33.656 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:33.657 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:33.657 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.657 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.919 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:33.920 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:33.920 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:33.920 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:33.920 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:33.920 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.920 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.921 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.922 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:33.923 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:33.923 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:33.923 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:33.923 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:33.923 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:33.923 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:33.923 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:33.923 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:33.923 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:33.923 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:33.923 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:33.923 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:33.923 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:33.923 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:33.923 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:33.923 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:33.923 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:33.923 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:33.923 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:33.923 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:33.923 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:33.923 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:33.923 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:33.923 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:33.923 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:33.923 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:33.923 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:33.923 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:33.923 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:33.923 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:33.923 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:33.923 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:33.923 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:33.923 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:33.923 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:33.923 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:33.923 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:33.923 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:33.923 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:33.923 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:33.923 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:33.923 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:33.923 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:33.923 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:33.923 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:33.923 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:33.923 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:33.923 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:33.923 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:02:33.923 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:33.923 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:33.923 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:33.923 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:33.923 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:33.923 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:33.923 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:33.923 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:33.923 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:33.923 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:33.923 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:33.923 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:33.923 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:33.923 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:33.923 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:33.923 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:33.923 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:33.923 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:33.923 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:33.923 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:33.923 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:33.923 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:33.923 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:33.923 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:33.923 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:33.923 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:33.923 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:33.923 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:33.923 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:33.923 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:33.923 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:33.923 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:33.923 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:33.924 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:33.924 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:33.924 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:33.924 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:33.924 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:33.924 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:33.924 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:33.924 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:33.924 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:33.924 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:33.924 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:33.924 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:33.924 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:33.924 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:33.924 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:33.924 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:33.924 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:33.924 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:33.924 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:33.924 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:33.924 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:33.924 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:33.924 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:33.924 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:33.924 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:33.924 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:33.924 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:33.924 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:33.924 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:33.924 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:33.924 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:33.924 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:33.924 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:33.924 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:33.924 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:33.924 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:33.924 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:33.924 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:33.924 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:33.924 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:33.924 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:33.924 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:33.924 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:33.924 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:34.184 11:45:31 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:34.184 11:45:31 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:34.184 00:02:34.184 real 0m43.633s 00:02:34.184 user 4m7.559s 00:02:34.184 sys 0m48.145s 00:02:34.184 11:45:31 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:34.184 11:45:31 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:34.184 ************************************ 00:02:34.184 END TEST build_native_dpdk 00:02:34.184 ************************************ 00:02:34.184 11:45:31 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:34.184 11:45:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:34.184 11:45:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:34.184 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:34.444 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:34.444 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:34.444 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:34.705 Using 'verbs' RDMA provider 00:02:50.560 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:05.476 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:05.735 Creating mk/config.mk...done. 00:03:05.735 Creating mk/cc.flags.mk...done. 00:03:05.735 Type 'make' to build. 
00:03:05.735 11:46:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:05.735 11:46:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:05.736 11:46:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:05.736 11:46:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.736 ************************************ 00:03:05.736 START TEST make 00:03:05.736 ************************************ 00:03:05.736 11:46:03 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:06.305 make[1]: Nothing to be done for 'all'. 00:03:52.996 CC lib/ut_mock/mock.o 00:03:52.996 CC lib/log/log.o 00:03:52.996 CC lib/ut/ut.o 00:03:52.996 CC lib/log/log_flags.o 00:03:52.996 CC lib/log/log_deprecated.o 00:03:52.996 LIB libspdk_log.a 00:03:52.996 LIB libspdk_ut.a 00:03:52.996 LIB libspdk_ut_mock.a 00:03:52.996 SO libspdk_log.so.7.0 00:03:52.996 SO libspdk_ut_mock.so.6.0 00:03:52.996 SO libspdk_ut.so.2.0 00:03:52.996 SYMLINK libspdk_log.so 00:03:52.996 SYMLINK libspdk_ut_mock.so 00:03:52.996 SYMLINK libspdk_ut.so 00:03:52.996 CXX lib/trace_parser/trace.o 00:03:52.996 CC lib/dma/dma.o 00:03:52.996 CC lib/util/base64.o 00:03:52.996 CC lib/util/bit_array.o 00:03:52.996 CC lib/util/crc32.o 00:03:52.996 CC lib/util/cpuset.o 00:03:52.996 CC lib/util/crc16.o 00:03:52.996 CC lib/util/crc32c.o 00:03:52.996 CC lib/ioat/ioat.o 00:03:52.996 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.996 CC lib/util/crc32_ieee.o 00:03:52.996 CC lib/util/crc64.o 00:03:52.996 CC lib/util/dif.o 00:03:52.996 LIB libspdk_dma.a 00:03:52.996 SO libspdk_dma.so.5.0 00:03:52.996 CC lib/vfio_user/host/vfio_user.o 00:03:52.996 CC lib/util/fd.o 00:03:52.996 CC lib/util/fd_group.o 00:03:52.996 CC lib/util/file.o 00:03:52.996 SYMLINK libspdk_dma.so 00:03:52.996 CC lib/util/hexlify.o 00:03:52.996 CC lib/util/iov.o 00:03:52.996 CC lib/util/math.o 00:03:52.996 CC lib/util/net.o 00:03:52.996 LIB libspdk_ioat.a 00:03:52.996 SO libspdk_ioat.so.7.0 00:03:52.996 CC lib/util/pipe.o 00:03:52.996 CC 
lib/util/strerror_tls.o 00:03:52.996 LIB libspdk_vfio_user.a 00:03:52.996 SYMLINK libspdk_ioat.so 00:03:52.996 CC lib/util/string.o 00:03:52.996 CC lib/util/uuid.o 00:03:52.996 SO libspdk_vfio_user.so.5.0 00:03:52.996 CC lib/util/xor.o 00:03:52.996 CC lib/util/zipf.o 00:03:52.996 SYMLINK libspdk_vfio_user.so 00:03:52.996 CC lib/util/md5.o 00:03:52.996 LIB libspdk_util.a 00:03:52.996 SO libspdk_util.so.10.0 00:03:52.996 LIB libspdk_trace_parser.a 00:03:52.996 SO libspdk_trace_parser.so.6.0 00:03:52.996 SYMLINK libspdk_util.so 00:03:52.996 SYMLINK libspdk_trace_parser.so 00:03:52.996 CC lib/rdma_provider/common.o 00:03:52.996 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:52.996 CC lib/json/json_parse.o 00:03:52.996 CC lib/json/json_util.o 00:03:52.996 CC lib/json/json_write.o 00:03:52.996 CC lib/conf/conf.o 00:03:52.996 CC lib/env_dpdk/env.o 00:03:52.996 CC lib/vmd/vmd.o 00:03:52.996 CC lib/idxd/idxd.o 00:03:52.996 CC lib/rdma_utils/rdma_utils.o 00:03:52.996 CC lib/idxd/idxd_user.o 00:03:52.996 LIB libspdk_rdma_provider.a 00:03:52.996 SO libspdk_rdma_provider.so.6.0 00:03:52.996 LIB libspdk_conf.a 00:03:52.996 SO libspdk_conf.so.6.0 00:03:52.996 CC lib/vmd/led.o 00:03:52.996 CC lib/idxd/idxd_kernel.o 00:03:52.996 SYMLINK libspdk_rdma_provider.so 00:03:52.996 CC lib/env_dpdk/memory.o 00:03:52.996 LIB libspdk_rdma_utils.a 00:03:52.996 LIB libspdk_json.a 00:03:52.996 SYMLINK libspdk_conf.so 00:03:52.996 CC lib/env_dpdk/pci.o 00:03:52.996 SO libspdk_rdma_utils.so.1.0 00:03:52.996 SO libspdk_json.so.6.0 00:03:52.996 SYMLINK libspdk_rdma_utils.so 00:03:52.996 CC lib/env_dpdk/init.o 00:03:52.996 SYMLINK libspdk_json.so 00:03:52.996 CC lib/env_dpdk/threads.o 00:03:52.996 CC lib/env_dpdk/pci_ioat.o 00:03:52.996 CC lib/env_dpdk/pci_virtio.o 00:03:52.996 CC lib/env_dpdk/pci_vmd.o 00:03:52.996 CC lib/env_dpdk/pci_idxd.o 00:03:52.996 CC lib/env_dpdk/pci_event.o 00:03:52.996 CC lib/env_dpdk/sigbus_handler.o 00:03:52.996 CC lib/env_dpdk/pci_dpdk.o 00:03:52.996 CC 
lib/env_dpdk/pci_dpdk_2207.o 00:03:52.996 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:52.996 LIB libspdk_idxd.a 00:03:52.996 LIB libspdk_vmd.a 00:03:52.996 SO libspdk_idxd.so.12.1 00:03:52.996 SO libspdk_vmd.so.6.0 00:03:52.996 SYMLINK libspdk_idxd.so 00:03:52.996 SYMLINK libspdk_vmd.so 00:03:52.996 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:52.996 CC lib/jsonrpc/jsonrpc_server.o 00:03:52.996 CC lib/jsonrpc/jsonrpc_client.o 00:03:52.996 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:52.996 LIB libspdk_jsonrpc.a 00:03:52.996 SO libspdk_jsonrpc.so.6.0 00:03:52.996 SYMLINK libspdk_jsonrpc.so 00:03:52.996 LIB libspdk_env_dpdk.a 00:03:52.996 SO libspdk_env_dpdk.so.15.0 00:03:52.996 CC lib/rpc/rpc.o 00:03:52.996 SYMLINK libspdk_env_dpdk.so 00:03:52.996 LIB libspdk_rpc.a 00:03:52.996 SO libspdk_rpc.so.6.0 00:03:52.996 SYMLINK libspdk_rpc.so 00:03:52.996 CC lib/notify/notify.o 00:03:52.996 CC lib/notify/notify_rpc.o 00:03:52.996 CC lib/trace/trace_flags.o 00:03:52.996 CC lib/trace/trace.o 00:03:52.996 CC lib/trace/trace_rpc.o 00:03:52.996 CC lib/keyring/keyring.o 00:03:52.996 CC lib/keyring/keyring_rpc.o 00:03:52.996 LIB libspdk_notify.a 00:03:52.996 SO libspdk_notify.so.6.0 00:03:52.996 LIB libspdk_keyring.a 00:03:52.996 LIB libspdk_trace.a 00:03:52.996 SYMLINK libspdk_notify.so 00:03:52.996 SO libspdk_keyring.so.2.0 00:03:52.996 SO libspdk_trace.so.11.0 00:03:52.996 SYMLINK libspdk_keyring.so 00:03:52.996 SYMLINK libspdk_trace.so 00:03:52.996 CC lib/sock/sock.o 00:03:52.996 CC lib/sock/sock_rpc.o 00:03:52.996 CC lib/thread/thread.o 00:03:52.996 CC lib/thread/iobuf.o 00:03:52.996 LIB libspdk_sock.a 00:03:52.996 SO libspdk_sock.so.10.0 00:03:52.996 SYMLINK libspdk_sock.so 00:03:52.996 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:52.996 CC lib/nvme/nvme_ctrlr.o 00:03:52.996 CC lib/nvme/nvme_fabric.o 00:03:52.996 CC lib/nvme/nvme_ns_cmd.o 00:03:52.996 CC lib/nvme/nvme_pcie_common.o 00:03:52.996 CC lib/nvme/nvme_ns.o 00:03:52.996 CC lib/nvme/nvme_qpair.o 00:03:52.996 CC lib/nvme/nvme_pcie.o 
00:03:52.996 CC lib/nvme/nvme.o 00:03:53.257 LIB libspdk_thread.a 00:03:53.515 SO libspdk_thread.so.10.1 00:03:53.515 SYMLINK libspdk_thread.so 00:03:53.515 CC lib/nvme/nvme_quirks.o 00:03:53.515 CC lib/nvme/nvme_transport.o 00:03:53.515 CC lib/nvme/nvme_discovery.o 00:03:53.515 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:53.515 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.515 CC lib/nvme/nvme_tcp.o 00:03:53.797 CC lib/nvme/nvme_opal.o 00:03:53.797 CC lib/nvme/nvme_io_msg.o 00:03:53.797 CC lib/nvme/nvme_poll_group.o 00:03:54.128 CC lib/nvme/nvme_zns.o 00:03:54.128 CC lib/nvme/nvme_stubs.o 00:03:54.128 CC lib/nvme/nvme_auth.o 00:03:54.128 CC lib/accel/accel.o 00:03:54.128 CC lib/accel/accel_rpc.o 00:03:54.128 CC lib/accel/accel_sw.o 00:03:54.396 CC lib/nvme/nvme_cuse.o 00:03:54.396 CC lib/nvme/nvme_rdma.o 00:03:54.656 CC lib/blob/blobstore.o 00:03:54.656 CC lib/blob/request.o 00:03:54.656 CC lib/init/json_config.o 00:03:54.656 CC lib/virtio/virtio.o 00:03:54.656 CC lib/init/subsystem.o 00:03:54.916 CC lib/init/subsystem_rpc.o 00:03:54.916 CC lib/virtio/virtio_vhost_user.o 00:03:54.916 CC lib/virtio/virtio_vfio_user.o 00:03:54.916 CC lib/init/rpc.o 00:03:54.916 CC lib/virtio/virtio_pci.o 00:03:55.177 CC lib/blob/zeroes.o 00:03:55.177 CC lib/fsdev/fsdev.o 00:03:55.177 CC lib/fsdev/fsdev_io.o 00:03:55.177 CC lib/blob/blob_bs_dev.o 00:03:55.177 LIB libspdk_init.a 00:03:55.177 SO libspdk_init.so.6.0 00:03:55.177 CC lib/fsdev/fsdev_rpc.o 00:03:55.177 SYMLINK libspdk_init.so 00:03:55.177 LIB libspdk_accel.a 00:03:55.438 SO libspdk_accel.so.16.0 00:03:55.438 LIB libspdk_virtio.a 00:03:55.438 SO libspdk_virtio.so.7.0 00:03:55.438 CC lib/event/app.o 00:03:55.438 CC lib/event/log_rpc.o 00:03:55.438 CC lib/event/reactor.o 00:03:55.438 CC lib/event/app_rpc.o 00:03:55.438 SYMLINK libspdk_accel.so 00:03:55.438 CC lib/event/scheduler_static.o 00:03:55.438 SYMLINK libspdk_virtio.so 00:03:55.698 CC lib/bdev/bdev.o 00:03:55.698 CC lib/bdev/bdev_rpc.o 00:03:55.698 CC lib/bdev/bdev_zone.o 
00:03:55.698 CC lib/bdev/part.o 00:03:55.698 CC lib/bdev/scsi_nvme.o 00:03:55.698 LIB libspdk_fsdev.a 00:03:55.698 SO libspdk_fsdev.so.1.0 00:03:55.698 LIB libspdk_nvme.a 00:03:55.698 SYMLINK libspdk_fsdev.so 00:03:55.958 LIB libspdk_event.a 00:03:55.958 SO libspdk_nvme.so.14.0 00:03:55.958 SO libspdk_event.so.14.0 00:03:55.958 SYMLINK libspdk_event.so 00:03:55.958 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:56.219 SYMLINK libspdk_nvme.so 00:03:56.479 LIB libspdk_fuse_dispatcher.a 00:03:56.739 SO libspdk_fuse_dispatcher.so.1.0 00:03:56.739 SYMLINK libspdk_fuse_dispatcher.so 00:03:57.676 LIB libspdk_blob.a 00:03:57.676 SO libspdk_blob.so.11.0 00:03:57.935 SYMLINK libspdk_blob.so 00:03:58.194 CC lib/blobfs/blobfs.o 00:03:58.194 CC lib/blobfs/tree.o 00:03:58.194 CC lib/lvol/lvol.o 00:03:58.194 LIB libspdk_bdev.a 00:03:58.455 SO libspdk_bdev.so.16.0 00:03:58.455 SYMLINK libspdk_bdev.so 00:03:58.715 CC lib/scsi/dev.o 00:03:58.715 CC lib/scsi/scsi.o 00:03:58.715 CC lib/nvmf/ctrlr.o 00:03:58.715 CC lib/scsi/port.o 00:03:58.715 CC lib/ublk/ublk.o 00:03:58.715 CC lib/scsi/lun.o 00:03:58.715 CC lib/nbd/nbd.o 00:03:58.715 CC lib/ftl/ftl_core.o 00:03:58.715 CC lib/ftl/ftl_init.o 00:03:58.715 CC lib/nbd/nbd_rpc.o 00:03:58.975 CC lib/ftl/ftl_layout.o 00:03:58.975 CC lib/scsi/scsi_bdev.o 00:03:58.975 CC lib/scsi/scsi_pr.o 00:03:58.975 CC lib/ftl/ftl_debug.o 00:03:58.975 LIB libspdk_blobfs.a 00:03:58.975 SO libspdk_blobfs.so.10.0 00:03:58.975 CC lib/ftl/ftl_io.o 00:03:58.975 SYMLINK libspdk_blobfs.so 00:03:58.975 CC lib/ftl/ftl_sb.o 00:03:59.235 LIB libspdk_nbd.a 00:03:59.235 SO libspdk_nbd.so.7.0 00:03:59.235 LIB libspdk_lvol.a 00:03:59.235 CC lib/scsi/scsi_rpc.o 00:03:59.235 SYMLINK libspdk_nbd.so 00:03:59.235 CC lib/scsi/task.o 00:03:59.235 SO libspdk_lvol.so.10.0 00:03:59.235 SYMLINK libspdk_lvol.so 00:03:59.235 CC lib/ublk/ublk_rpc.o 00:03:59.235 CC lib/ftl/ftl_l2p.o 00:03:59.235 CC lib/ftl/ftl_l2p_flat.o 00:03:59.235 CC lib/ftl/ftl_nv_cache.o 00:03:59.235 CC 
lib/nvmf/ctrlr_discovery.o 00:03:59.235 CC lib/ftl/ftl_band.o 00:03:59.235 CC lib/ftl/ftl_band_ops.o 00:03:59.494 CC lib/ftl/ftl_writer.o 00:03:59.494 LIB libspdk_ublk.a 00:03:59.494 SO libspdk_ublk.so.3.0 00:03:59.494 CC lib/nvmf/ctrlr_bdev.o 00:03:59.494 CC lib/ftl/ftl_rq.o 00:03:59.494 LIB libspdk_scsi.a 00:03:59.494 SYMLINK libspdk_ublk.so 00:03:59.494 CC lib/ftl/ftl_reloc.o 00:03:59.494 SO libspdk_scsi.so.9.0 00:03:59.494 SYMLINK libspdk_scsi.so 00:03:59.494 CC lib/ftl/ftl_l2p_cache.o 00:03:59.753 CC lib/ftl/ftl_p2l.o 00:03:59.753 CC lib/ftl/ftl_p2l_log.o 00:03:59.753 CC lib/nvmf/subsystem.o 00:03:59.753 CC lib/ftl/mngt/ftl_mngt.o 00:03:59.753 CC lib/iscsi/conn.o 00:03:59.753 CC lib/vhost/vhost.o 00:04:00.013 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:00.013 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:00.013 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:00.013 CC lib/nvmf/nvmf.o 00:04:00.272 CC lib/nvmf/nvmf_rpc.o 00:04:00.272 CC lib/nvmf/transport.o 00:04:00.272 CC lib/nvmf/tcp.o 00:04:00.272 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:00.272 CC lib/nvmf/stubs.o 00:04:00.532 CC lib/iscsi/init_grp.o 00:04:00.532 CC lib/vhost/vhost_rpc.o 00:04:00.532 CC lib/iscsi/iscsi.o 00:04:00.532 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:00.791 CC lib/nvmf/mdns_server.o 00:04:00.791 CC lib/vhost/vhost_scsi.o 00:04:00.791 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:00.791 CC lib/nvmf/rdma.o 00:04:01.051 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:01.051 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:01.051 CC lib/nvmf/auth.o 00:04:01.051 CC lib/iscsi/param.o 00:04:01.051 CC lib/iscsi/portal_grp.o 00:04:01.051 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.051 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.311 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.311 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.311 CC lib/ftl/utils/ftl_conf.o 00:04:01.311 CC lib/iscsi/tgt_node.o 00:04:01.311 CC lib/vhost/vhost_blk.o 00:04:01.311 CC lib/ftl/utils/ftl_md.o 00:04:01.311 CC lib/ftl/utils/ftl_mempool.o 00:04:01.571 CC 
lib/vhost/rte_vhost_user.o 00:04:01.571 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.571 CC lib/iscsi/iscsi_subsystem.o 00:04:01.831 CC lib/iscsi/iscsi_rpc.o 00:04:01.831 CC lib/iscsi/task.o 00:04:01.831 CC lib/ftl/utils/ftl_property.o 00:04:01.831 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:01.831 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:01.831 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:01.831 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.092 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.092 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.092 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.092 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.092 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.092 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.092 LIB libspdk_iscsi.a 00:04:02.092 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.352 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:02.352 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:02.352 CC lib/ftl/base/ftl_base_dev.o 00:04:02.352 SO libspdk_iscsi.so.8.0 00:04:02.352 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.352 CC lib/ftl/ftl_trace.o 00:04:02.352 SYMLINK libspdk_iscsi.so 00:04:02.352 LIB libspdk_vhost.a 00:04:02.612 SO libspdk_vhost.so.8.0 00:04:02.612 LIB libspdk_ftl.a 00:04:02.612 SYMLINK libspdk_vhost.so 00:04:02.872 SO libspdk_ftl.so.9.0 00:04:02.872 SYMLINK libspdk_ftl.so 00:04:03.132 LIB libspdk_nvmf.a 00:04:03.132 SO libspdk_nvmf.so.19.0 00:04:03.392 SYMLINK libspdk_nvmf.so 00:04:03.962 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.962 CC module/sock/posix/posix.o 00:04:03.962 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:03.962 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:03.962 CC module/fsdev/aio/fsdev_aio.o 00:04:03.962 CC module/keyring/file/keyring.o 00:04:03.962 CC module/blob/bdev/blob_bdev.o 00:04:03.962 CC module/accel/error/accel_error.o 00:04:03.962 CC module/keyring/linux/keyring.o 00:04:03.962 CC module/accel/ioat/accel_ioat.o 00:04:03.962 LIB libspdk_env_dpdk_rpc.a 00:04:03.962 SO libspdk_env_dpdk_rpc.so.6.0 
00:04:03.962 SYMLINK libspdk_env_dpdk_rpc.so 00:04:03.962 CC module/keyring/file/keyring_rpc.o 00:04:03.962 CC module/keyring/linux/keyring_rpc.o 00:04:03.962 LIB libspdk_scheduler_dpdk_governor.a 00:04:03.962 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:04.222 CC module/accel/error/accel_error_rpc.o 00:04:04.222 LIB libspdk_scheduler_dynamic.a 00:04:04.222 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.222 SO libspdk_scheduler_dynamic.so.4.0 00:04:04.222 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:04.222 LIB libspdk_keyring_linux.a 00:04:04.222 LIB libspdk_blob_bdev.a 00:04:04.222 LIB libspdk_keyring_file.a 00:04:04.222 SYMLINK libspdk_scheduler_dynamic.so 00:04:04.222 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:04.222 SO libspdk_keyring_linux.so.1.0 00:04:04.222 CC module/accel/dsa/accel_dsa.o 00:04:04.222 SO libspdk_blob_bdev.so.11.0 00:04:04.222 SO libspdk_keyring_file.so.2.0 00:04:04.222 LIB libspdk_accel_error.a 00:04:04.222 LIB libspdk_accel_ioat.a 00:04:04.222 SO libspdk_accel_error.so.2.0 00:04:04.222 SYMLINK libspdk_keyring_linux.so 00:04:04.222 SYMLINK libspdk_keyring_file.so 00:04:04.222 SYMLINK libspdk_blob_bdev.so 00:04:04.222 SO libspdk_accel_ioat.so.6.0 00:04:04.222 CC module/fsdev/aio/linux_aio_mgr.o 00:04:04.222 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.222 SYMLINK libspdk_accel_ioat.so 00:04:04.222 SYMLINK libspdk_accel_error.so 00:04:04.222 CC module/accel/dsa/accel_dsa_rpc.o 00:04:04.482 LIB libspdk_scheduler_gscheduler.a 00:04:04.482 SO libspdk_scheduler_gscheduler.so.4.0 00:04:04.482 LIB libspdk_accel_dsa.a 00:04:04.482 CC module/accel/iaa/accel_iaa.o 00:04:04.482 SO libspdk_accel_dsa.so.5.0 00:04:04.482 CC module/bdev/error/vbdev_error.o 00:04:04.482 CC module/bdev/delay/vbdev_delay.o 00:04:04.482 SYMLINK libspdk_scheduler_gscheduler.so 00:04:04.482 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.482 SYMLINK libspdk_accel_dsa.so 00:04:04.482 LIB libspdk_fsdev_aio.a 00:04:04.482 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.482 CC 
module/bdev/gpt/gpt.o 00:04:04.741 SO libspdk_fsdev_aio.so.1.0 00:04:04.741 SYMLINK libspdk_fsdev_aio.so 00:04:04.741 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.741 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.741 LIB libspdk_sock_posix.a 00:04:04.741 CC module/bdev/malloc/bdev_malloc.o 00:04:04.741 CC module/bdev/null/bdev_null.o 00:04:04.741 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.741 SO libspdk_sock_posix.so.6.0 00:04:04.741 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.741 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.741 SYMLINK libspdk_sock_posix.so 00:04:04.741 LIB libspdk_accel_iaa.a 00:04:04.741 SO libspdk_accel_iaa.so.3.0 00:04:04.741 LIB libspdk_blobfs_bdev.a 00:04:05.000 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.000 SO libspdk_blobfs_bdev.so.6.0 00:04:05.000 SYMLINK libspdk_accel_iaa.so 00:04:05.000 LIB libspdk_bdev_error.a 00:04:05.000 CC module/bdev/null/bdev_null_rpc.o 00:04:05.000 SO libspdk_bdev_error.so.6.0 00:04:05.000 CC module/bdev/nvme/bdev_nvme.o 00:04:05.000 SYMLINK libspdk_blobfs_bdev.so 00:04:05.000 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.000 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.000 SYMLINK libspdk_bdev_error.so 00:04:05.000 LIB libspdk_bdev_gpt.a 00:04:05.000 LIB libspdk_bdev_delay.a 00:04:05.000 SO libspdk_bdev_gpt.so.6.0 00:04:05.000 SO libspdk_bdev_delay.so.6.0 00:04:05.000 LIB libspdk_bdev_null.a 00:04:05.000 SO libspdk_bdev_null.so.6.0 00:04:05.000 SYMLINK libspdk_bdev_gpt.so 00:04:05.000 LIB libspdk_bdev_lvol.a 00:04:05.000 LIB libspdk_bdev_malloc.a 00:04:05.000 SYMLINK libspdk_bdev_delay.so 00:04:05.259 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.259 SO libspdk_bdev_lvol.so.6.0 00:04:05.259 SO libspdk_bdev_malloc.so.6.0 00:04:05.259 SYMLINK libspdk_bdev_null.so 00:04:05.259 CC module/bdev/split/vbdev_split.o 00:04:05.259 SYMLINK libspdk_bdev_malloc.so 00:04:05.259 CC module/bdev/raid/bdev_raid.o 00:04:05.259 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.259 SYMLINK libspdk_bdev_lvol.so 
00:04:05.259 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.259 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.259 CC module/bdev/aio/bdev_aio.o 00:04:05.259 CC module/bdev/ftl/bdev_ftl.o 00:04:05.259 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.517 LIB libspdk_bdev_split.a 00:04:05.517 SO libspdk_bdev_split.so.6.0 00:04:05.517 LIB libspdk_bdev_passthru.a 00:04:05.517 SO libspdk_bdev_passthru.so.6.0 00:04:05.517 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.517 SYMLINK libspdk_bdev_split.so 00:04:05.517 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.517 SYMLINK libspdk_bdev_passthru.so 00:04:05.517 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.517 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.517 LIB libspdk_bdev_ftl.a 00:04:05.517 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.517 CC module/bdev/raid/raid0.o 00:04:05.517 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.517 SO libspdk_bdev_ftl.so.6.0 00:04:05.517 CC module/bdev/nvme/nvme_rpc.o 00:04:05.776 SYMLINK libspdk_bdev_ftl.so 00:04:05.776 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.776 LIB libspdk_bdev_zone_block.a 00:04:05.776 LIB libspdk_bdev_aio.a 00:04:05.776 SO libspdk_bdev_zone_block.so.6.0 00:04:05.776 SO libspdk_bdev_aio.so.6.0 00:04:05.776 CC module/bdev/raid/raid1.o 00:04:05.776 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:05.776 LIB libspdk_bdev_iscsi.a 00:04:05.776 SYMLINK libspdk_bdev_aio.so 00:04:05.776 CC module/bdev/raid/concat.o 00:04:05.776 CC module/bdev/raid/raid5f.o 00:04:05.776 SYMLINK libspdk_bdev_zone_block.so 00:04:05.776 CC module/bdev/nvme/vbdev_opal.o 00:04:05.776 SO libspdk_bdev_iscsi.so.6.0 00:04:05.776 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.776 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.035 SYMLINK libspdk_bdev_iscsi.so 00:04:06.035 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.035 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.294 LIB libspdk_bdev_raid.a 00:04:06.294 LIB libspdk_bdev_virtio.a 00:04:06.294 SO libspdk_bdev_raid.so.6.0 
00:04:06.294 SO libspdk_bdev_virtio.so.6.0 00:04:06.553 SYMLINK libspdk_bdev_virtio.so 00:04:06.553 SYMLINK libspdk_bdev_raid.so 00:04:07.121 LIB libspdk_bdev_nvme.a 00:04:07.380 SO libspdk_bdev_nvme.so.7.0 00:04:07.380 SYMLINK libspdk_bdev_nvme.so 00:04:07.948 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.948 CC module/event/subsystems/sock/sock.o 00:04:07.948 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.948 CC module/event/subsystems/vmd/vmd.o 00:04:07.948 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.948 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.948 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.948 CC module/event/subsystems/fsdev/fsdev.o 00:04:07.948 CC module/event/subsystems/keyring/keyring.o 00:04:08.207 LIB libspdk_event_vhost_blk.a 00:04:08.207 LIB libspdk_event_sock.a 00:04:08.207 LIB libspdk_event_fsdev.a 00:04:08.207 LIB libspdk_event_keyring.a 00:04:08.207 LIB libspdk_event_vmd.a 00:04:08.207 LIB libspdk_event_iobuf.a 00:04:08.207 SO libspdk_event_vhost_blk.so.3.0 00:04:08.207 LIB libspdk_event_scheduler.a 00:04:08.207 SO libspdk_event_sock.so.5.0 00:04:08.207 SO libspdk_event_fsdev.so.1.0 00:04:08.207 SO libspdk_event_keyring.so.1.0 00:04:08.207 SO libspdk_event_vmd.so.6.0 00:04:08.207 SO libspdk_event_scheduler.so.4.0 00:04:08.207 SO libspdk_event_iobuf.so.3.0 00:04:08.207 SYMLINK libspdk_event_sock.so 00:04:08.207 SYMLINK libspdk_event_vhost_blk.so 00:04:08.207 SYMLINK libspdk_event_keyring.so 00:04:08.207 SYMLINK libspdk_event_fsdev.so 00:04:08.207 SYMLINK libspdk_event_scheduler.so 00:04:08.207 SYMLINK libspdk_event_vmd.so 00:04:08.207 SYMLINK libspdk_event_iobuf.so 00:04:08.466 CC module/event/subsystems/accel/accel.o 00:04:08.726 LIB libspdk_event_accel.a 00:04:08.726 SO libspdk_event_accel.so.6.0 00:04:08.985 SYMLINK libspdk_event_accel.so 00:04:09.246 CC module/event/subsystems/bdev/bdev.o 00:04:09.506 LIB libspdk_event_bdev.a 00:04:09.506 SO libspdk_event_bdev.so.6.0 00:04:09.506 SYMLINK 
libspdk_event_bdev.so 00:04:09.767 CC module/event/subsystems/ublk/ublk.o 00:04:10.028 CC module/event/subsystems/nbd/nbd.o 00:04:10.028 CC module/event/subsystems/scsi/scsi.o 00:04:10.028 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.028 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.028 LIB libspdk_event_ublk.a 00:04:10.028 LIB libspdk_event_nbd.a 00:04:10.028 SO libspdk_event_ublk.so.3.0 00:04:10.028 LIB libspdk_event_scsi.a 00:04:10.028 SO libspdk_event_nbd.so.6.0 00:04:10.028 SO libspdk_event_scsi.so.6.0 00:04:10.028 SYMLINK libspdk_event_nbd.so 00:04:10.028 SYMLINK libspdk_event_ublk.so 00:04:10.028 SYMLINK libspdk_event_scsi.so 00:04:10.289 LIB libspdk_event_nvmf.a 00:04:10.289 SO libspdk_event_nvmf.so.6.0 00:04:10.289 SYMLINK libspdk_event_nvmf.so 00:04:10.549 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:10.549 CC module/event/subsystems/iscsi/iscsi.o 00:04:10.810 LIB libspdk_event_vhost_scsi.a 00:04:10.810 LIB libspdk_event_iscsi.a 00:04:10.810 SO libspdk_event_vhost_scsi.so.3.0 00:04:10.810 SO libspdk_event_iscsi.so.6.0 00:04:10.810 SYMLINK libspdk_event_vhost_scsi.so 00:04:10.810 SYMLINK libspdk_event_iscsi.so 00:04:11.071 SO libspdk.so.6.0 00:04:11.071 SYMLINK libspdk.so 00:04:11.332 CC app/trace_record/trace_record.o 00:04:11.332 TEST_HEADER include/spdk/accel.h 00:04:11.332 CXX app/trace/trace.o 00:04:11.332 TEST_HEADER include/spdk/accel_module.h 00:04:11.332 TEST_HEADER include/spdk/assert.h 00:04:11.332 TEST_HEADER include/spdk/barrier.h 00:04:11.332 TEST_HEADER include/spdk/base64.h 00:04:11.332 TEST_HEADER include/spdk/bdev.h 00:04:11.332 TEST_HEADER include/spdk/bdev_module.h 00:04:11.332 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.332 TEST_HEADER include/spdk/bit_array.h 00:04:11.332 TEST_HEADER include/spdk/bit_pool.h 00:04:11.332 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.332 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.332 TEST_HEADER include/spdk/blobfs.h 00:04:11.332 TEST_HEADER include/spdk/blob.h 
00:04:11.332 TEST_HEADER include/spdk/conf.h 00:04:11.333 TEST_HEADER include/spdk/config.h 00:04:11.333 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.333 TEST_HEADER include/spdk/cpuset.h 00:04:11.333 TEST_HEADER include/spdk/crc16.h 00:04:11.333 TEST_HEADER include/spdk/crc32.h 00:04:11.333 TEST_HEADER include/spdk/crc64.h 00:04:11.333 TEST_HEADER include/spdk/dif.h 00:04:11.333 TEST_HEADER include/spdk/dma.h 00:04:11.333 TEST_HEADER include/spdk/endian.h 00:04:11.333 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.333 TEST_HEADER include/spdk/env.h 00:04:11.333 TEST_HEADER include/spdk/event.h 00:04:11.333 TEST_HEADER include/spdk/fd_group.h 00:04:11.333 TEST_HEADER include/spdk/fd.h 00:04:11.333 TEST_HEADER include/spdk/file.h 00:04:11.333 TEST_HEADER include/spdk/fsdev.h 00:04:11.333 TEST_HEADER include/spdk/fsdev_module.h 00:04:11.333 TEST_HEADER include/spdk/ftl.h 00:04:11.333 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:11.333 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.333 TEST_HEADER include/spdk/hexlify.h 00:04:11.333 TEST_HEADER include/spdk/histogram_data.h 00:04:11.333 CC test/thread/poller_perf/poller_perf.o 00:04:11.333 TEST_HEADER include/spdk/idxd.h 00:04:11.333 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.333 CC examples/ioat/perf/perf.o 00:04:11.333 CC examples/util/zipf/zipf.o 00:04:11.333 TEST_HEADER include/spdk/init.h 00:04:11.333 TEST_HEADER include/spdk/ioat.h 00:04:11.333 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.333 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.333 TEST_HEADER include/spdk/json.h 00:04:11.333 TEST_HEADER include/spdk/jsonrpc.h 00:04:11.333 TEST_HEADER include/spdk/keyring.h 00:04:11.333 TEST_HEADER include/spdk/keyring_module.h 00:04:11.333 TEST_HEADER include/spdk/likely.h 00:04:11.333 TEST_HEADER include/spdk/log.h 00:04:11.333 TEST_HEADER include/spdk/lvol.h 00:04:11.333 TEST_HEADER include/spdk/md5.h 00:04:11.333 TEST_HEADER include/spdk/memory.h 00:04:11.333 TEST_HEADER include/spdk/mmio.h 00:04:11.333 
TEST_HEADER include/spdk/nbd.h 00:04:11.333 TEST_HEADER include/spdk/net.h 00:04:11.333 TEST_HEADER include/spdk/notify.h 00:04:11.333 TEST_HEADER include/spdk/nvme.h 00:04:11.333 TEST_HEADER include/spdk/nvme_intel.h 00:04:11.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.333 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.333 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.594 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:11.594 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.594 CC test/dma/test_dma/test_dma.o 00:04:11.594 TEST_HEADER include/spdk/nvmf.h 00:04:11.594 CC test/app/bdev_svc/bdev_svc.o 00:04:11.594 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.594 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.594 TEST_HEADER include/spdk/opal.h 00:04:11.594 TEST_HEADER include/spdk/opal_spec.h 00:04:11.594 TEST_HEADER include/spdk/pci_ids.h 00:04:11.594 TEST_HEADER include/spdk/pipe.h 00:04:11.594 TEST_HEADER include/spdk/queue.h 00:04:11.594 TEST_HEADER include/spdk/reduce.h 00:04:11.594 TEST_HEADER include/spdk/rpc.h 00:04:11.594 TEST_HEADER include/spdk/scheduler.h 00:04:11.594 TEST_HEADER include/spdk/scsi.h 00:04:11.594 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.594 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.594 TEST_HEADER include/spdk/sock.h 00:04:11.594 TEST_HEADER include/spdk/stdinc.h 00:04:11.594 TEST_HEADER include/spdk/string.h 00:04:11.594 TEST_HEADER include/spdk/thread.h 00:04:11.594 TEST_HEADER include/spdk/trace.h 00:04:11.594 TEST_HEADER include/spdk/trace_parser.h 00:04:11.594 TEST_HEADER include/spdk/tree.h 00:04:11.594 TEST_HEADER include/spdk/ublk.h 00:04:11.594 TEST_HEADER include/spdk/util.h 00:04:11.594 TEST_HEADER include/spdk/uuid.h 00:04:11.594 TEST_HEADER include/spdk/version.h 00:04:11.594 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.594 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.594 TEST_HEADER include/spdk/vhost.h 00:04:11.594 TEST_HEADER 
include/spdk/vmd.h 00:04:11.594 TEST_HEADER include/spdk/xor.h 00:04:11.594 TEST_HEADER include/spdk/zipf.h 00:04:11.594 CXX test/cpp_headers/accel.o 00:04:11.594 LINK poller_perf 00:04:11.594 LINK interrupt_tgt 00:04:11.594 LINK spdk_trace_record 00:04:11.594 LINK zipf 00:04:11.594 LINK ioat_perf 00:04:11.594 LINK bdev_svc 00:04:11.594 CXX test/cpp_headers/accel_module.o 00:04:11.594 LINK mem_callbacks 00:04:11.594 LINK spdk_trace 00:04:11.855 CC test/rpc_client/rpc_client_test.o 00:04:11.855 CXX test/cpp_headers/assert.o 00:04:11.855 CC app/nvmf_tgt/nvmf_main.o 00:04:11.855 CC test/event/event_perf/event_perf.o 00:04:11.855 CC examples/ioat/verify/verify.o 00:04:11.855 CXX test/cpp_headers/barrier.o 00:04:11.855 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.855 CC test/env/vtophys/vtophys.o 00:04:11.855 LINK test_dma 00:04:11.855 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.855 LINK event_perf 00:04:11.855 LINK rpc_client_test 00:04:11.855 LINK nvmf_tgt 00:04:12.116 CXX test/cpp_headers/base64.o 00:04:12.116 LINK vtophys 00:04:12.116 LINK verify 00:04:12.116 LINK iscsi_tgt 00:04:12.116 CC app/spdk_tgt/spdk_tgt.o 00:04:12.116 CXX test/cpp_headers/bdev.o 00:04:12.116 CC test/event/reactor/reactor.o 00:04:12.116 CC app/spdk_lspci/spdk_lspci.o 00:04:12.116 CC app/spdk_nvme_perf/perf.o 00:04:12.116 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.375 LINK spdk_tgt 00:04:12.375 CXX test/cpp_headers/bdev_module.o 00:04:12.375 LINK reactor 00:04:12.375 LINK spdk_lspci 00:04:12.375 CC examples/sock/hello_world/hello_sock.o 00:04:12.375 CC examples/thread/thread/thread_ex.o 00:04:12.375 LINK env_dpdk_post_init 00:04:12.375 LINK nvme_fuzz 00:04:12.375 CC test/accel/dif/dif.o 00:04:12.375 CXX test/cpp_headers/bdev_zone.o 00:04:12.375 CC test/event/reactor_perf/reactor_perf.o 00:04:12.635 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:12.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.635 LINK hello_sock 00:04:12.635 LINK thread 00:04:12.635 CC 
test/env/memory/memory_ut.o 00:04:12.635 CXX test/cpp_headers/bit_array.o 00:04:12.635 LINK reactor_perf 00:04:12.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.635 CC test/blobfs/mkfs/mkfs.o 00:04:12.635 CXX test/cpp_headers/bit_pool.o 00:04:12.899 CXX test/cpp_headers/blob_bdev.o 00:04:12.899 CC test/event/app_repeat/app_repeat.o 00:04:12.899 LINK mkfs 00:04:12.899 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.899 CC examples/idxd/perf/perf.o 00:04:12.899 CXX test/cpp_headers/blobfs_bdev.o 00:04:12.899 LINK app_repeat 00:04:13.171 LINK lsvmd 00:04:13.171 LINK spdk_nvme_perf 00:04:13.171 LINK vhost_fuzz 00:04:13.171 CC examples/vmd/led/led.o 00:04:13.171 LINK dif 00:04:13.171 CXX test/cpp_headers/blobfs.o 00:04:13.171 CC test/event/scheduler/scheduler.o 00:04:13.171 LINK idxd_perf 00:04:13.171 LINK led 00:04:13.171 CC app/spdk_nvme_identify/identify.o 00:04:13.442 CXX test/cpp_headers/blob.o 00:04:13.442 CC app/spdk_nvme_discover/discovery_aer.o 00:04:13.442 LINK memory_ut 00:04:13.442 CC test/lvol/esnap/esnap.o 00:04:13.442 CC test/app/histogram_perf/histogram_perf.o 00:04:13.442 CXX test/cpp_headers/conf.o 00:04:13.442 LINK scheduler 00:04:13.442 CC test/app/jsoncat/jsoncat.o 00:04:13.443 LINK spdk_nvme_discover 00:04:13.443 LINK histogram_perf 00:04:13.702 CXX test/cpp_headers/config.o 00:04:13.702 CXX test/cpp_headers/cpuset.o 00:04:13.702 LINK jsoncat 00:04:13.702 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:13.702 CC test/env/pci/pci_ut.o 00:04:13.702 CXX test/cpp_headers/crc16.o 00:04:13.702 CC app/spdk_top/spdk_top.o 00:04:13.702 CXX test/cpp_headers/crc32.o 00:04:13.961 CC test/nvme/aer/aer.o 00:04:13.961 CC test/bdev/bdevio/bdevio.o 00:04:13.961 LINK hello_fsdev 00:04:13.961 CXX test/cpp_headers/crc64.o 00:04:13.961 CC test/nvme/reset/reset.o 00:04:13.961 CXX test/cpp_headers/dif.o 00:04:14.220 LINK pci_ut 00:04:14.220 LINK aer 00:04:14.220 LINK spdk_nvme_identify 00:04:14.220 LINK reset 00:04:14.220 CXX test/cpp_headers/dma.o 00:04:14.220 LINK 
bdevio 00:04:14.479 LINK iscsi_fuzz 00:04:14.479 CC examples/accel/perf/accel_perf.o 00:04:14.479 CXX test/cpp_headers/endian.o 00:04:14.479 CC test/nvme/sgl/sgl.o 00:04:14.479 CC test/nvme/e2edp/nvme_dp.o 00:04:14.479 CC test/nvme/overhead/overhead.o 00:04:14.479 CC test/nvme/err_injection/err_injection.o 00:04:14.479 CXX test/cpp_headers/env_dpdk.o 00:04:14.739 CC test/nvme/startup/startup.o 00:04:14.739 CC test/app/stub/stub.o 00:04:14.739 LINK sgl 00:04:14.739 LINK err_injection 00:04:14.739 CXX test/cpp_headers/env.o 00:04:14.739 LINK nvme_dp 00:04:14.739 LINK spdk_top 00:04:14.739 LINK overhead 00:04:14.739 LINK startup 00:04:14.739 LINK stub 00:04:14.739 CXX test/cpp_headers/event.o 00:04:14.739 CXX test/cpp_headers/fd_group.o 00:04:14.999 LINK accel_perf 00:04:14.999 CC app/vhost/vhost.o 00:04:14.999 CC test/nvme/reserve/reserve.o 00:04:14.999 CXX test/cpp_headers/fd.o 00:04:14.999 CC test/nvme/simple_copy/simple_copy.o 00:04:14.999 CXX test/cpp_headers/file.o 00:04:14.999 CC test/nvme/connect_stress/connect_stress.o 00:04:14.999 CC app/spdk_dd/spdk_dd.o 00:04:15.258 CC app/fio/nvme/fio_plugin.o 00:04:15.258 LINK vhost 00:04:15.258 CXX test/cpp_headers/fsdev.o 00:04:15.258 LINK reserve 00:04:15.258 LINK connect_stress 00:04:15.258 CC examples/blob/hello_world/hello_blob.o 00:04:15.258 LINK simple_copy 00:04:15.258 CC examples/blob/cli/blobcli.o 00:04:15.258 CXX test/cpp_headers/fsdev_module.o 00:04:15.258 CXX test/cpp_headers/ftl.o 00:04:15.517 LINK spdk_dd 00:04:15.517 CC test/nvme/boot_partition/boot_partition.o 00:04:15.517 LINK hello_blob 00:04:15.517 CC test/nvme/compliance/nvme_compliance.o 00:04:15.517 CC test/nvme/fused_ordering/fused_ordering.o 00:04:15.517 CXX test/cpp_headers/fuse_dispatcher.o 00:04:15.517 CC examples/nvme/hello_world/hello_world.o 00:04:15.517 LINK boot_partition 00:04:15.517 CXX test/cpp_headers/gpt_spec.o 00:04:15.776 LINK spdk_nvme 00:04:15.776 LINK fused_ordering 00:04:15.776 CC examples/nvme/reconnect/reconnect.o 
00:04:15.776 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:15.776 LINK blobcli 00:04:15.776 LINK nvme_compliance 00:04:15.776 CXX test/cpp_headers/hexlify.o 00:04:15.776 LINK hello_world 00:04:15.776 CC examples/nvme/arbitration/arbitration.o 00:04:15.776 CC app/fio/bdev/fio_plugin.o 00:04:16.035 CXX test/cpp_headers/histogram_data.o 00:04:16.036 LINK reconnect 00:04:16.036 CC examples/nvme/hotplug/hotplug.o 00:04:16.036 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.036 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.036 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:16.036 CXX test/cpp_headers/idxd.o 00:04:16.295 LINK arbitration 00:04:16.295 CXX test/cpp_headers/idxd_spec.o 00:04:16.295 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.295 LINK doorbell_aers 00:04:16.295 LINK nvme_manage 00:04:16.295 LINK hello_bdev 00:04:16.295 LINK hotplug 00:04:16.295 LINK spdk_bdev 00:04:16.295 CXX test/cpp_headers/init.o 00:04:16.555 CC examples/nvme/abort/abort.o 00:04:16.555 LINK cmb_copy 00:04:16.555 CXX test/cpp_headers/ioat.o 00:04:16.555 CXX test/cpp_headers/ioat_spec.o 00:04:16.555 CC test/nvme/fdp/fdp.o 00:04:16.555 CC test/nvme/cuse/cuse.o 00:04:16.555 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.555 CXX test/cpp_headers/iscsi_spec.o 00:04:16.555 CXX test/cpp_headers/json.o 00:04:16.555 CXX test/cpp_headers/jsonrpc.o 00:04:16.555 CXX test/cpp_headers/keyring.o 00:04:16.814 LINK pmr_persistence 00:04:16.814 CXX test/cpp_headers/keyring_module.o 00:04:16.814 CXX test/cpp_headers/likely.o 00:04:16.814 CXX test/cpp_headers/log.o 00:04:16.814 CXX test/cpp_headers/lvol.o 00:04:16.814 LINK fdp 00:04:16.814 LINK abort 00:04:16.814 CXX test/cpp_headers/md5.o 00:04:16.814 CXX test/cpp_headers/memory.o 00:04:16.814 CXX test/cpp_headers/mmio.o 00:04:16.814 CXX test/cpp_headers/nbd.o 00:04:16.814 CXX test/cpp_headers/net.o 00:04:16.814 CXX test/cpp_headers/notify.o 00:04:16.814 LINK bdevperf 00:04:17.073 CXX test/cpp_headers/nvme.o 00:04:17.073 CXX 
test/cpp_headers/nvme_intel.o 00:04:17.073 CXX test/cpp_headers/nvme_ocssd.o 00:04:17.073 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:17.073 CXX test/cpp_headers/nvme_spec.o 00:04:17.073 CXX test/cpp_headers/nvme_zns.o 00:04:17.073 CXX test/cpp_headers/nvmf_cmd.o 00:04:17.073 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:17.073 CXX test/cpp_headers/nvmf.o 00:04:17.073 CXX test/cpp_headers/nvmf_spec.o 00:04:17.073 CXX test/cpp_headers/nvmf_transport.o 00:04:17.333 CXX test/cpp_headers/opal.o 00:04:17.333 CXX test/cpp_headers/opal_spec.o 00:04:17.333 CXX test/cpp_headers/pci_ids.o 00:04:17.333 CC examples/nvmf/nvmf/nvmf.o 00:04:17.333 CXX test/cpp_headers/pipe.o 00:04:17.333 CXX test/cpp_headers/queue.o 00:04:17.333 CXX test/cpp_headers/reduce.o 00:04:17.333 CXX test/cpp_headers/rpc.o 00:04:17.333 CXX test/cpp_headers/scheduler.o 00:04:17.333 CXX test/cpp_headers/scsi.o 00:04:17.333 CXX test/cpp_headers/scsi_spec.o 00:04:17.333 CXX test/cpp_headers/sock.o 00:04:17.591 CXX test/cpp_headers/stdinc.o 00:04:17.591 CXX test/cpp_headers/string.o 00:04:17.591 CXX test/cpp_headers/thread.o 00:04:17.591 CXX test/cpp_headers/trace.o 00:04:17.591 CXX test/cpp_headers/trace_parser.o 00:04:17.591 CXX test/cpp_headers/tree.o 00:04:17.591 CXX test/cpp_headers/ublk.o 00:04:17.591 CXX test/cpp_headers/util.o 00:04:17.591 LINK nvmf 00:04:17.591 CXX test/cpp_headers/uuid.o 00:04:17.591 CXX test/cpp_headers/version.o 00:04:17.591 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.591 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.591 CXX test/cpp_headers/vhost.o 00:04:17.591 CXX test/cpp_headers/vmd.o 00:04:17.849 CXX test/cpp_headers/xor.o 00:04:17.849 CXX test/cpp_headers/zipf.o 00:04:17.849 LINK cuse 00:04:18.782 LINK esnap 00:04:19.351 00:04:19.351 real 1m13.538s 00:04:19.351 user 5m36.939s 00:04:19.351 sys 1m7.375s 00:04:19.351 11:47:16 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:19.351 11:47:16 make -- common/autotest_common.sh@10 -- $ set +x 00:04:19.351 
************************************ 00:04:19.351 END TEST make 00:04:19.351 ************************************ 00:04:19.351 11:47:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:19.351 11:47:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:19.351 11:47:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:19.351 11:47:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.351 11:47:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:19.351 11:47:16 -- pm/common@44 -- $ pid=6192 00:04:19.351 11:47:16 -- pm/common@50 -- $ kill -TERM 6192 00:04:19.351 11:47:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.351 11:47:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:19.351 11:47:16 -- pm/common@44 -- $ pid=6194 00:04:19.351 11:47:16 -- pm/common@50 -- $ kill -TERM 6194 00:04:19.351 11:47:17 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:19.351 11:47:17 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:19.351 11:47:17 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:19.610 11:47:17 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:19.611 11:47:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.611 11:47:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.611 11:47:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.611 11:47:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.611 11:47:17 -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.611 11:47:17 -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.611 11:47:17 -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.611 11:47:17 -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.611 11:47:17 -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.611 11:47:17 -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.611 11:47:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:19.611 11:47:17 -- scripts/common.sh@344 -- # case "$op" in 00:04:19.611 11:47:17 -- scripts/common.sh@345 -- # : 1 00:04:19.611 11:47:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.611 11:47:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.611 11:47:17 -- scripts/common.sh@365 -- # decimal 1 00:04:19.611 11:47:17 -- scripts/common.sh@353 -- # local d=1 00:04:19.611 11:47:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.611 11:47:17 -- scripts/common.sh@355 -- # echo 1 00:04:19.611 11:47:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.611 11:47:17 -- scripts/common.sh@366 -- # decimal 2 00:04:19.611 11:47:17 -- scripts/common.sh@353 -- # local d=2 00:04:19.611 11:47:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.611 11:47:17 -- scripts/common.sh@355 -- # echo 2 00:04:19.611 11:47:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.611 11:47:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.611 11:47:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.611 11:47:17 -- scripts/common.sh@368 -- # return 0 00:04:19.611 11:47:17 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.611 11:47:17 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.611 --rc genhtml_branch_coverage=1 00:04:19.611 --rc genhtml_function_coverage=1 00:04:19.611 --rc genhtml_legend=1 00:04:19.611 --rc geninfo_all_blocks=1 00:04:19.611 --rc geninfo_unexecuted_blocks=1 00:04:19.611 00:04:19.611 ' 00:04:19.611 11:47:17 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.611 --rc genhtml_branch_coverage=1 00:04:19.611 --rc genhtml_function_coverage=1 00:04:19.611 --rc genhtml_legend=1 00:04:19.611 --rc geninfo_all_blocks=1 00:04:19.611 --rc 
geninfo_unexecuted_blocks=1 00:04:19.611 00:04:19.611 ' 00:04:19.611 11:47:17 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.611 --rc genhtml_branch_coverage=1 00:04:19.611 --rc genhtml_function_coverage=1 00:04:19.611 --rc genhtml_legend=1 00:04:19.611 --rc geninfo_all_blocks=1 00:04:19.611 --rc geninfo_unexecuted_blocks=1 00:04:19.611 00:04:19.611 ' 00:04:19.611 11:47:17 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.611 --rc genhtml_branch_coverage=1 00:04:19.611 --rc genhtml_function_coverage=1 00:04:19.611 --rc genhtml_legend=1 00:04:19.611 --rc geninfo_all_blocks=1 00:04:19.611 --rc geninfo_unexecuted_blocks=1 00:04:19.611 00:04:19.611 ' 00:04:19.611 11:47:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.611 11:47:17 -- nvmf/common.sh@7 -- # uname -s 00:04:19.611 11:47:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.611 11:47:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.611 11:47:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.611 11:47:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.611 11:47:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.611 11:47:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.611 11:47:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.611 11:47:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.611 11:47:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.611 11:47:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.611 11:47:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:04:19.611 11:47:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:04:19.611 11:47:17 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.611 11:47:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.611 11:47:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.611 11:47:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.611 11:47:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.611 11:47:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.611 11:47:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.611 11:47:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.611 11:47:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.611 11:47:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.611 11:47:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.611 11:47:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.611 11:47:17 -- paths/export.sh@5 -- # export PATH 00:04:19.611 11:47:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.611 11:47:17 -- nvmf/common.sh@51 -- # : 0 00:04:19.611 11:47:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.611 11:47:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.611 11:47:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.611 11:47:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.611 11:47:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.611 11:47:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.611 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.611 11:47:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.611 11:47:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.611 11:47:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.611 11:47:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:19.611 11:47:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:19.611 11:47:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:19.611 11:47:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:19.611 11:47:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.611 11:47:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:19.611 11:47:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.611 11:47:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:19.611 11:47:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:19.611 11:47:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:19.611 11:47:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor 
--property 00:04:19.611 11:47:17 -- spdk/autotest.sh@48 -- # udevadm_pid=66448 00:04:19.611 11:47:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:19.611 11:47:17 -- pm/common@17 -- # local monitor 00:04:19.611 11:47:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.611 11:47:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.611 11:47:17 -- pm/common@25 -- # sleep 1 00:04:19.611 11:47:17 -- pm/common@21 -- # date +%s 00:04:19.611 11:47:17 -- pm/common@21 -- # date +%s 00:04:19.611 11:47:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733485637 00:04:19.611 11:47:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733485637 00:04:19.611 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733485637_collect-cpu-load.pm.log 00:04:19.611 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733485637_collect-vmstat.pm.log 00:04:20.549 11:47:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:20.549 11:47:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:20.549 11:47:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.549 11:47:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.549 11:47:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:20.549 11:47:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:20.549 11:47:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.808 11:47:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:20.808 11:47:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:20.808 11:47:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:20.808 11:47:18 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:20.808 11:47:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:20.808 11:47:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:20.808 11:47:18 -- common/autotest_common.sh@1455 -- # uname 00:04:20.808 11:47:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:20.808 11:47:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:20.808 11:47:18 -- common/autotest_common.sh@1475 -- # uname 00:04:20.808 11:47:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:20.808 11:47:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:20.808 11:47:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:20.808 lcov: LCOV version 1.15 00:04:20.808 11:47:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:35.716 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:35.716 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:50.611 11:47:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:50.612 11:47:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.612 11:47:46 -- common/autotest_common.sh@10 -- # set +x 00:04:50.612 11:47:46 -- spdk/autotest.sh@78 -- # rm -f 00:04:50.612 11:47:46 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.612 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.612 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:50.612 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:50.612 11:47:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:50.612 11:47:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:50.612 11:47:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:50.612 11:47:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:50.612 11:47:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:50.612 11:47:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:50.612 11:47:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:50.612 11:47:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:50.612 11:47:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:50.612 11:47:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:50.612 11:47:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:50.612 11:47:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:50.612 11:47:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:50.612 11:47:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:50.612 11:47:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:04:50.612 11:47:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:50.612 11:47:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:50.612 11:47:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:50.612 11:47:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:50.612 11:47:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.612 11:47:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.612 11:47:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:50.612 11:47:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:50.612 11:47:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.612 No valid GPT data, bailing 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # pt= 00:04:50.612 11:47:47 -- scripts/common.sh@395 -- # return 1 00:04:50.612 11:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.612 1+0 records in 00:04:50.612 1+0 records out 00:04:50.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049603 s, 211 MB/s 00:04:50.612 11:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.612 11:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.612 11:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:50.612 11:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:50.612 11:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:50.612 No valid GPT data, bailing 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # pt= 00:04:50.612 11:47:47 -- scripts/common.sh@395 -- # return 1 00:04:50.612 11:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:50.612 1+0 records in 
00:04:50.612 1+0 records out 00:04:50.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062148 s, 169 MB/s 00:04:50.612 11:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.612 11:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.612 11:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:50.612 11:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:50.612 11:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:50.612 No valid GPT data, bailing 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # pt= 00:04:50.612 11:47:47 -- scripts/common.sh@395 -- # return 1 00:04:50.612 11:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:50.612 1+0 records in 00:04:50.612 1+0 records out 00:04:50.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649034 s, 162 MB/s 00:04:50.612 11:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.612 11:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.612 11:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:50.612 11:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:50.612 11:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:50.612 No valid GPT data, bailing 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:50.612 11:47:47 -- scripts/common.sh@394 -- # pt= 00:04:50.612 11:47:47 -- scripts/common.sh@395 -- # return 1 00:04:50.612 11:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:50.612 1+0 records in 00:04:50.612 1+0 records out 00:04:50.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640867 s, 164 MB/s 00:04:50.612 11:47:47 -- spdk/autotest.sh@105 -- # sync 00:04:50.612 11:47:47 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:04:50.612 11:47:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.612 11:47:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:53.189 11:47:50 -- spdk/autotest.sh@111 -- # uname -s 00:04:53.189 11:47:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:53.189 11:47:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:53.189 11:47:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:53.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.776 Hugepages 00:04:53.776 node hugesize free / total 00:04:53.776 node0 1048576kB 0 / 0 00:04:53.776 node0 2048kB 0 / 0 00:04:53.776 00:04:53.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.776 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:53.776 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:54.034 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:54.034 11:47:51 -- spdk/autotest.sh@117 -- # uname -s 00:04:54.034 11:47:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:54.034 11:47:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:54.034 11:47:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.970 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.970 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.970 11:47:52 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:55.910 11:47:53 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:55.910 11:47:53 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:55.910 11:47:53 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.910 11:47:53 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:04:55.910 11:47:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:55.910 11:47:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:55.910 11:47:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.910 11:47:53 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.910 11:47:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:56.170 11:47:53 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:56.170 11:47:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:56.170 11:47:53 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.738 Waiting for block devices as requested 00:04:56.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.738 11:47:54 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.738 11:47:54 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:56.738 11:47:54 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:56.738 11:47:54 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
00:04:56.997 11:47:54 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.997 11:47:54 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.997 11:47:54 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.997 11:47:54 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1541 -- # continue 00:04:56.997 11:47:54 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.997 11:47:54 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.997 11:47:54 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:56.997 11:47:54 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:56.997 
11:47:54 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.997 11:47:54 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.997 11:47:54 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.997 11:47:54 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.997 11:47:54 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.997 11:47:54 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.997 11:47:54 -- common/autotest_common.sh@1541 -- # continue 00:04:56.997 11:47:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.997 11:47:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.997 11:47:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.997 11:47:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.997 11:47:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.997 11:47:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.997 11:47:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.938 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.938 11:47:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:57.938 11:47:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.938 11:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.197 11:47:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.197 11:47:55 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:58.197 11:47:55 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.197 11:47:55 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:58.197 11:47:55 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:58.197 11:47:55 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:58.197 11:47:55 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.197 11:47:55 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:58.197 11:47:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:58.197 11:47:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:58.197 11:47:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.197 11:47:55 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.197 11:47:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:58.197 11:47:55 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:58.197 11:47:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.197 11:47:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.197 11:47:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.197 11:47:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.197 11:47:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.197 11:47:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.197 11:47:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.197 11:47:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.197 11:47:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.197 11:47:55 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:58.197 11:47:55 -- 
common/autotest_common.sh@1570 -- # return 0 00:04:58.197 11:47:55 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:58.197 11:47:55 -- common/autotest_common.sh@1578 -- # return 0 00:04:58.197 11:47:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.197 11:47:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.197 11:47:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.197 11:47:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.197 11:47:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.197 11:47:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.197 11:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.197 11:47:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.197 11:47:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.197 11:47:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.197 11:47:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.197 11:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.197 ************************************ 00:04:58.197 START TEST env 00:04:58.197 ************************************ 00:04:58.197 11:47:55 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.197 * Looking for test storage... 
00:04:58.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.197 11:47:55 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.457 11:47:55 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.457 11:47:55 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.457 11:47:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.457 11:47:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.457 11:47:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.457 11:47:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.457 11:47:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.457 11:47:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.457 11:47:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.457 11:47:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.457 11:47:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.457 11:47:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.457 11:47:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.457 11:47:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:58.457 11:47:56 env -- scripts/common.sh@345 -- # : 1 00:04:58.457 11:47:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.457 11:47:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.457 11:47:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:58.457 11:47:56 env -- scripts/common.sh@353 -- # local d=1 00:04:58.457 11:47:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.457 11:47:56 env -- scripts/common.sh@355 -- # echo 1 00:04:58.457 11:47:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.457 11:47:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:58.457 11:47:56 env -- scripts/common.sh@353 -- # local d=2 00:04:58.457 11:47:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.457 11:47:56 env -- scripts/common.sh@355 -- # echo 2 00:04:58.457 11:47:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.457 11:47:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.457 11:47:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.457 11:47:56 env -- scripts/common.sh@368 -- # return 0 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.457 --rc genhtml_branch_coverage=1 00:04:58.457 --rc genhtml_function_coverage=1 00:04:58.457 --rc genhtml_legend=1 00:04:58.457 --rc geninfo_all_blocks=1 00:04:58.457 --rc geninfo_unexecuted_blocks=1 00:04:58.457 00:04:58.457 ' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.457 --rc genhtml_branch_coverage=1 00:04:58.457 --rc genhtml_function_coverage=1 00:04:58.457 --rc genhtml_legend=1 00:04:58.457 --rc geninfo_all_blocks=1 00:04:58.457 --rc geninfo_unexecuted_blocks=1 00:04:58.457 00:04:58.457 ' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:58.457 --rc genhtml_branch_coverage=1 00:04:58.457 --rc genhtml_function_coverage=1 00:04:58.457 --rc genhtml_legend=1 00:04:58.457 --rc geninfo_all_blocks=1 00:04:58.457 --rc geninfo_unexecuted_blocks=1 00:04:58.457 00:04:58.457 ' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.457 --rc genhtml_branch_coverage=1 00:04:58.457 --rc genhtml_function_coverage=1 00:04:58.457 --rc genhtml_legend=1 00:04:58.457 --rc geninfo_all_blocks=1 00:04:58.457 --rc geninfo_unexecuted_blocks=1 00:04:58.457 00:04:58.457 ' 00:04:58.457 11:47:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.457 11:47:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.457 11:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.457 ************************************ 00:04:58.457 START TEST env_memory 00:04:58.457 ************************************ 00:04:58.457 11:47:56 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.457 00:04:58.457 00:04:58.457 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.457 http://cunit.sourceforge.net/ 00:04:58.457 00:04:58.457 00:04:58.457 Suite: memory 00:04:58.457 Test: alloc and free memory map ...[2024-12-06 11:47:56.131109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.457 passed 00:04:58.457 Test: mem map translation ...[2024-12-06 11:47:56.173850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.457 [2024-12-06 11:47:56.173913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.457 [2024-12-06 11:47:56.173975] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.457 [2024-12-06 11:47:56.173994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.717 passed 00:04:58.717 Test: mem map registration ...[2024-12-06 11:47:56.239268] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:58.717 [2024-12-06 11:47:56.239334] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:58.717 passed 00:04:58.717 Test: mem map adjacent registrations ...passed 00:04:58.717 00:04:58.717 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.717 suites 1 1 n/a 0 0 00:04:58.717 tests 4 4 4 0 0 00:04:58.717 asserts 152 152 152 0 n/a 00:04:58.717 00:04:58.717 Elapsed time = 0.234 seconds 00:04:58.717 00:04:58.717 real 0m0.285s 00:04:58.717 user 0m0.248s 00:04:58.717 sys 0m0.027s 00:04:58.717 11:47:56 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.717 11:47:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.717 ************************************ 00:04:58.717 END TEST env_memory 00:04:58.717 ************************************ 00:04:58.717 11:47:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.717 11:47:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.717 11:47:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.717 11:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.717 
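[Editor's note] The xtrace earlier in this run steps through the `cmp_versions` helper in scripts/common.sh to decide whether lcov 1.15 is older than 2. A self-contained sketch of the same dotted-version comparison (function name and simplifications are illustrative, not the actual helper):

```shell
# Compare two dotted versions field by field, treating missing fields as 0.
# Returns 0 (true) when $1 < $2, mirroring the "lt 1.15 2" check traced above.
version_lt() {
    local IFS=.
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace takes the `lt` branch and sets the lcov 1.x `--rc lcov_branch_coverage=1` style options rather than the lcov 2.x flag spelling.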
************************************ 00:04:58.717 START TEST env_vtophys 00:04:58.717 ************************************ 00:04:58.717 11:47:56 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.717 EAL: lib.eal log level changed from notice to debug 00:04:58.717 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 1 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 2 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 3 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 4 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 5 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 6 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 7 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 8 as core 0 on socket 0 00:04:58.717 EAL: Detected lcore 9 as core 0 on socket 0 00:04:58.717 EAL: Maximum logical cores by configuration: 128 00:04:58.717 EAL: Detected CPU lcores: 10 00:04:58.717 EAL: Detected NUMA nodes: 1 00:04:58.717 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:58.717 EAL: Detected shared linkage of DPDK 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:58.717 EAL: Registered [vdev] bus. 
00:04:58.717 EAL: bus.vdev log level changed from disabled to notice 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:58.717 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:58.717 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:58.717 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:58.717 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.717 EAL: No shared files mode enabled, IPC is disabled 00:04:58.717 EAL: Selected IOVA mode 'PA' 00:04:58.717 EAL: Probing VFIO support... 00:04:58.717 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.717 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:58.717 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.717 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.717 EAL: Setting up physically contiguous memory... 
00:04:58.717 EAL: Setting maximum number of open files to 524288 00:04:58.717 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.717 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.717 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.717 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.717 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.717 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.717 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.717 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.717 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.717 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.717 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.717 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.717 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.717 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.717 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.717 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.717 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.717 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.717 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.717 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.717 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.717 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.717 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.717 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.717 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.717 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.717 EAL: Hugepages will be freed exactly as allocated. 
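[Editor's note] Each memseg list reserved above pairs a small header area (0x61000 bytes) with a 0x400000000 VA reservation. That second figure is not arbitrary: it is simply n_segs × hugepage_sz from the "Creating 4 segment lists: n_segs:8192 ... hugepage_sz:2097152" line:

```shell
# Reproduce the per-memseg-list VA reservation size from the EAL log above.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))           # 2 MiB hugepages, as detected
printf '0x%x\n' $(( n_segs * hugepage_sz ))  # → 0x400000000 (16 GiB of VA)
```

Four such lists give the environment 64 GiB of reservable virtual address space per socket, even though no hugepages are physically backed until the heap grows.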
00:04:58.717 EAL: No shared files mode enabled, IPC is disabled 00:04:58.717 EAL: No shared files mode enabled, IPC is disabled 00:04:58.976 EAL: TSC frequency is ~2290000 KHz 00:04:58.976 EAL: Main lcore 0 is ready (tid=7f045fe1ba40;cpuset=[0]) 00:04:58.976 EAL: Trying to obtain current memory policy. 00:04:58.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.976 EAL: Restoring previous memory policy: 0 00:04:58.976 EAL: request: mp_malloc_sync 00:04:58.976 EAL: No shared files mode enabled, IPC is disabled 00:04:58.976 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.976 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.976 EAL: No shared files mode enabled, IPC is disabled 00:04:58.976 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.976 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.976 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:58.976 00:04:58.976 00:04:58.976 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.976 http://cunit.sourceforge.net/ 00:04:58.976 00:04:58.976 00:04:58.976 Suite: components_suite 00:04:59.236 Test: vtophys_malloc_test ...passed 00:04:59.236 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.236 EAL: Trying to obtain current memory policy. 
00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.236 EAL: Trying to obtain current memory policy. 00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.236 EAL: Trying to obtain current memory policy. 00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.236 EAL: Trying to obtain current memory policy. 
00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.236 EAL: Trying to obtain current memory policy. 00:04:59.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.236 EAL: Restoring previous memory policy: 4 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.236 EAL: request: mp_malloc_sync 00:04:59.236 EAL: No shared files mode enabled, IPC is disabled 00:04:59.236 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.496 EAL: request: mp_malloc_sync 00:04:59.496 EAL: No shared files mode enabled, IPC is disabled 00:04:59.496 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.496 EAL: Trying to obtain current memory policy. 00:04:59.496 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.496 EAL: Restoring previous memory policy: 4 00:04:59.496 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.496 EAL: request: mp_malloc_sync 00:04:59.496 EAL: No shared files mode enabled, IPC is disabled 00:04:59.496 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.496 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.496 EAL: request: mp_malloc_sync 00:04:59.496 EAL: No shared files mode enabled, IPC is disabled 00:04:59.496 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.496 EAL: Trying to obtain current memory policy. 
00:04:59.496 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.496 EAL: Restoring previous memory policy: 4 00:04:59.496 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.496 EAL: request: mp_malloc_sync 00:04:59.496 EAL: No shared files mode enabled, IPC is disabled 00:04:59.496 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.496 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.496 EAL: request: mp_malloc_sync 00:04:59.496 EAL: No shared files mode enabled, IPC is disabled 00:04:59.496 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.496 EAL: Trying to obtain current memory policy. 00:04:59.496 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.755 EAL: Restoring previous memory policy: 4 00:04:59.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.755 EAL: request: mp_malloc_sync 00:04:59.755 EAL: No shared files mode enabled, IPC is disabled 00:04:59.755 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.755 EAL: request: mp_malloc_sync 00:04:59.755 EAL: No shared files mode enabled, IPC is disabled 00:04:59.755 EAL: Heap on socket 0 was shrunk by 514MB 00:04:59.755 EAL: Trying to obtain current memory policy. 
00:04:59.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.015 EAL: Restoring previous memory policy: 4 00:05:00.015 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.015 EAL: request: mp_malloc_sync 00:05:00.015 EAL: No shared files mode enabled, IPC is disabled 00:05:00.015 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.274 passed 00:05:00.274 00:05:00.274 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.274 suites 1 1 n/a 0 0 00:05:00.274 tests 2 2 2 0 0 00:05:00.274 asserts 5281 5281 5281 0 n/a 00:05:00.274 00:05:00.274 Elapsed time = 1.358 seconds 00:05:00.274 EAL: request: mp_malloc_sync 00:05:00.274 EAL: No shared files mode enabled, IPC is disabled 00:05:00.274 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.274 EAL: request: mp_malloc_sync 00:05:00.274 EAL: No shared files mode enabled, IPC is disabled 00:05:00.274 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.274 EAL: No shared files mode enabled, IPC is disabled 00:05:00.274 EAL: No shared files mode enabled, IPC is disabled 00:05:00.274 EAL: No shared files mode enabled, IPC is disabled 00:05:00.274 00:05:00.274 real 0m1.596s 00:05:00.274 user 0m0.765s 00:05:00.274 sys 0m0.701s 00:05:00.274 11:47:58 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.274 11:47:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 ************************************ 00:05:00.274 END TEST env_vtophys 00:05:00.274 ************************************ 00:05:00.534 11:47:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.534 11:47:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.534 11:47:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.534 11:47:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.534 
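[Editor's note] The vtophys_spdk_malloc_test expand/shrink sizes traced above (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) are not random: each step is double the last minus 2, so every allocation straddles a power-of-two boundary. The sequence can be reproduced directly:

```shell
# Regenerate the heap-expansion step sizes seen in the EAL trace above.
mb=4
sizes=$mb
while (( mb < 1026 )); do
    mb=$(( 2 * mb - 2 ))
    sizes="$sizes $mb"
done
echo "$sizes"   # → 4 6 10 18 34 66 130 258 514 1026
```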
************************************ 00:05:00.534 START TEST env_pci 00:05:00.534 ************************************ 00:05:00.534 11:47:58 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.534 00:05:00.534 00:05:00.534 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.534 http://cunit.sourceforge.net/ 00:05:00.534 00:05:00.534 00:05:00.534 Suite: pci 00:05:00.534 Test: pci_hook ...[2024-12-06 11:47:58.097268] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68692 has claimed it 00:05:00.534 passed 00:05:00.534 00:05:00.534 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.534 suites 1 1 n/a 0 0 00:05:00.534 tests 1 1 1 0 0 00:05:00.534 asserts 25 25 25 0 n/a 00:05:00.534 00:05:00.534 Elapsed time = 0.004 seconds 00:05:00.534 EAL: Cannot find device (10000:00:01.0) 00:05:00.534 EAL: Failed to attach device on primary process 00:05:00.534 00:05:00.534 real 0m0.076s 00:05:00.534 user 0m0.030s 00:05:00.534 sys 0m0.045s 00:05:00.534 11:47:58 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.534 11:47:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.534 ************************************ 00:05:00.534 END TEST env_pci 00:05:00.534 ************************************ 00:05:00.534 11:47:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.534 11:47:58 env -- env/env.sh@15 -- # uname 00:05:00.534 11:47:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.534 11:47:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.534 11:47:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.534 11:47:58 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:00.534 11:47:58 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.534 11:47:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.534 ************************************ 00:05:00.534 START TEST env_dpdk_post_init 00:05:00.534 ************************************ 00:05:00.534 11:47:58 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.534 EAL: Detected CPU lcores: 10 00:05:00.534 EAL: Detected NUMA nodes: 1 00:05:00.534 EAL: Detected shared linkage of DPDK 00:05:00.534 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.534 EAL: Selected IOVA mode 'PA' 00:05:00.794 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.794 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:00.794 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:00.794 Starting DPDK initialization... 00:05:00.794 Starting SPDK post initialization... 00:05:00.794 SPDK NVMe probe 00:05:00.794 Attaching to 0000:00:10.0 00:05:00.794 Attaching to 0000:00:11.0 00:05:00.794 Attached to 0000:00:10.0 00:05:00.794 Attached to 0000:00:11.0 00:05:00.794 Cleaning up... 
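[Editor's note] The probe lines above identify the two emulated NVMe controllers by PCI address in domain:bus:device.function form (0000:00:10.0 and 0000:00:11.0). A quick illustrative parse of that format (the helper name is made up for this sketch, not part of SPDK):

```shell
# Split a PCI BDF string of the form domain:bus:device.function.
parse_bdf() {
    local domain bus dev func
    IFS=':.' read -r domain bus dev func <<< "$1"
    echo "domain=$domain bus=$bus dev=$dev func=$func"
}

parse_bdf 0000:00:10.0   # → domain=0000 bus=00 dev=10 func=0
```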
00:05:00.794 00:05:00.794 real 0m0.235s 00:05:00.794 user 0m0.064s 00:05:00.794 sys 0m0.072s 00:05:00.794 11:47:58 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.794 11:47:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.794 ************************************ 00:05:00.794 END TEST env_dpdk_post_init 00:05:00.794 ************************************ 00:05:00.794 11:47:58 env -- env/env.sh@26 -- # uname 00:05:00.794 11:47:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.794 11:47:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.794 11:47:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.794 11:47:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.794 11:47:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.794 ************************************ 00:05:00.794 START TEST env_mem_callbacks 00:05:00.794 ************************************ 00:05:00.794 11:47:58 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.055 EAL: Detected CPU lcores: 10 00:05:01.055 EAL: Detected NUMA nodes: 1 00:05:01.055 EAL: Detected shared linkage of DPDK 00:05:01.055 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.055 EAL: Selected IOVA mode 'PA' 00:05:01.055 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.055 00:05:01.055 00:05:01.055 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.055 http://cunit.sourceforge.net/ 00:05:01.055 00:05:01.055 00:05:01.055 Suite: memory 00:05:01.055 Test: test ... 
00:05:01.055 register 0x200000200000 2097152 00:05:01.055 malloc 3145728 00:05:01.055 register 0x200000400000 4194304 00:05:01.055 buf 0x200000500000 len 3145728 PASSED 00:05:01.055 malloc 64 00:05:01.055 buf 0x2000004fff40 len 64 PASSED 00:05:01.055 malloc 4194304 00:05:01.055 register 0x200000800000 6291456 00:05:01.055 buf 0x200000a00000 len 4194304 PASSED 00:05:01.055 free 0x200000500000 3145728 00:05:01.055 free 0x2000004fff40 64 00:05:01.055 unregister 0x200000400000 4194304 PASSED 00:05:01.055 free 0x200000a00000 4194304 00:05:01.055 unregister 0x200000800000 6291456 PASSED 00:05:01.055 malloc 8388608 00:05:01.055 register 0x200000400000 10485760 00:05:01.055 buf 0x200000600000 len 8388608 PASSED 00:05:01.055 free 0x200000600000 8388608 00:05:01.055 unregister 0x200000400000 10485760 PASSED 00:05:01.055 passed 00:05:01.055 00:05:01.055 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.055 suites 1 1 n/a 0 0 00:05:01.055 tests 1 1 1 0 0 00:05:01.055 asserts 15 15 15 0 n/a 00:05:01.055 00:05:01.055 Elapsed time = 0.011 seconds 00:05:01.055 00:05:01.055 real 0m0.180s 00:05:01.055 user 0m0.033s 00:05:01.055 sys 0m0.046s 00:05:01.055 11:47:58 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.055 11:47:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.055 ************************************ 00:05:01.055 END TEST env_mem_callbacks 00:05:01.055 ************************************ 00:05:01.055 00:05:01.055 real 0m2.941s 00:05:01.055 user 0m1.377s 00:05:01.055 sys 0m1.237s 00:05:01.055 11:47:58 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.055 11:47:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.055 ************************************ 00:05:01.055 END TEST env 00:05:01.055 ************************************ 00:05:01.316 11:47:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.316 11:47:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.316 11:47:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.316 11:47:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.316 ************************************ 00:05:01.316 START TEST rpc 00:05:01.316 ************************************ 00:05:01.316 11:47:58 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.316 * Looking for test storage... 00:05:01.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.316 11:47:58 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.316 11:47:58 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.316 11:47:58 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.316 11:47:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.316 11:47:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.316 11:47:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.316 11:47:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.316 11:47:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.316 11:47:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.316 11:47:59 rpc -- scripts/common.sh@345 -- # : 1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.316 11:47:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.316 11:47:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.316 11:47:59 rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.316 11:47:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.316 11:47:59 rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.316 11:47:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.316 11:47:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.316 11:47:59 rpc -- scripts/common.sh@368 -- # return 0 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.316 --rc genhtml_branch_coverage=1 00:05:01.316 --rc genhtml_function_coverage=1 00:05:01.316 --rc genhtml_legend=1 00:05:01.316 --rc geninfo_all_blocks=1 00:05:01.316 --rc geninfo_unexecuted_blocks=1 00:05:01.316 00:05:01.316 ' 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.316 --rc genhtml_branch_coverage=1 00:05:01.316 --rc genhtml_function_coverage=1 00:05:01.316 --rc genhtml_legend=1 00:05:01.316 --rc geninfo_all_blocks=1 00:05:01.316 --rc geninfo_unexecuted_blocks=1 00:05:01.316 00:05:01.316 ' 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:01.316 --rc genhtml_branch_coverage=1 00:05:01.316 --rc genhtml_function_coverage=1 00:05:01.316 --rc genhtml_legend=1 00:05:01.316 --rc geninfo_all_blocks=1 00:05:01.316 --rc geninfo_unexecuted_blocks=1 00:05:01.316 00:05:01.316 ' 00:05:01.316 11:47:59 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.317 --rc genhtml_branch_coverage=1 00:05:01.317 --rc genhtml_function_coverage=1 00:05:01.317 --rc genhtml_legend=1 00:05:01.317 --rc geninfo_all_blocks=1 00:05:01.317 --rc geninfo_unexecuted_blocks=1 00:05:01.317 00:05:01.317 ' 00:05:01.317 11:47:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68815 00:05:01.317 11:47:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.317 11:47:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.317 11:47:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68815 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 68815 ']' 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.317 11:47:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.577 [2024-12-06 11:47:59.142260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
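[Editor's note] The `waitforlisten 68815` call above blocks until the freshly launched spdk_tgt is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that wait-for-socket idea (this is not the real helper, which also verifies the PID and issues a probe RPC; names and retry counts here are illustrative):

```shell
# Poll until a UNIX-domain socket path exists, or give up after N retries.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

wait_for_sock /var/tmp/spdk.sock 1 || echo "no RPC socket yet"
```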
00:05:01.577 [2024-12-06 11:47:59.142413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68815 ] 00:05:01.577 [2024-12-06 11:47:59.288850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.837 [2024-12-06 11:47:59.333745] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.837 [2024-12-06 11:47:59.333820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68815' to capture a snapshot of events at runtime. 00:05:01.837 [2024-12-06 11:47:59.333832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.837 [2024-12-06 11:47:59.333839] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.837 [2024-12-06 11:47:59.333859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68815 for offline analysis/debug. 
00:05:01.837 [2024-12-06 11:47:59.333892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.407 11:47:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.407 11:47:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.407 11:47:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.407 11:47:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.407 11:47:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.407 11:47:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.407 11:47:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.407 11:47:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.407 11:47:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 ************************************ 00:05:02.407 START TEST rpc_integrity 00:05:02.407 ************************************ 00:05:02.407 11:47:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:02.407 11:47:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.407 11:47:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.407 11:47:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 11:47:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.407 11:47:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.407 11:47:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.407 11:48:00 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.407 { 00:05:02.407 "name": "Malloc0", 00:05:02.407 "aliases": [ 00:05:02.407 "9af290ba-5af2-4884-86cb-22b038a1cfe1" 00:05:02.407 ], 00:05:02.407 "product_name": "Malloc disk", 00:05:02.407 "block_size": 512, 00:05:02.407 "num_blocks": 16384, 00:05:02.407 "uuid": "9af290ba-5af2-4884-86cb-22b038a1cfe1", 00:05:02.407 "assigned_rate_limits": { 00:05:02.407 "rw_ios_per_sec": 0, 00:05:02.407 "rw_mbytes_per_sec": 0, 00:05:02.407 "r_mbytes_per_sec": 0, 00:05:02.407 "w_mbytes_per_sec": 0 00:05:02.407 }, 00:05:02.407 "claimed": false, 00:05:02.407 "zoned": false, 00:05:02.407 "supported_io_types": { 00:05:02.407 "read": true, 00:05:02.407 "write": true, 00:05:02.407 "unmap": true, 00:05:02.407 "flush": true, 00:05:02.407 "reset": true, 00:05:02.407 "nvme_admin": false, 00:05:02.407 "nvme_io": false, 00:05:02.407 "nvme_io_md": false, 00:05:02.407 "write_zeroes": true, 00:05:02.407 "zcopy": true, 00:05:02.407 "get_zone_info": false, 00:05:02.407 "zone_management": false, 00:05:02.407 "zone_append": false, 00:05:02.407 "compare": false, 00:05:02.407 "compare_and_write": false, 00:05:02.407 "abort": true, 00:05:02.407 "seek_hole": false, 
00:05:02.407 "seek_data": false, 00:05:02.407 "copy": true, 00:05:02.407 "nvme_iov_md": false 00:05:02.407 }, 00:05:02.407 "memory_domains": [ 00:05:02.407 { 00:05:02.407 "dma_device_id": "system", 00:05:02.407 "dma_device_type": 1 00:05:02.407 }, 00:05:02.407 { 00:05:02.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.407 "dma_device_type": 2 00:05:02.407 } 00:05:02.407 ], 00:05:02.407 "driver_specific": {} 00:05:02.407 } 00:05:02.407 ]' 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 [2024-12-06 11:48:00.124610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.407 [2024-12-06 11:48:00.124696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.407 [2024-12-06 11:48:00.124740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:02.407 [2024-12-06 11:48:00.124752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.407 [2024-12-06 11:48:00.127202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.407 [2024-12-06 11:48:00.127254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.407 Passthru0 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:02.407 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.407 { 00:05:02.407 "name": "Malloc0", 00:05:02.407 "aliases": [ 00:05:02.407 "9af290ba-5af2-4884-86cb-22b038a1cfe1" 00:05:02.407 ], 00:05:02.407 "product_name": "Malloc disk", 00:05:02.407 "block_size": 512, 00:05:02.407 "num_blocks": 16384, 00:05:02.407 "uuid": "9af290ba-5af2-4884-86cb-22b038a1cfe1", 00:05:02.407 "assigned_rate_limits": { 00:05:02.407 "rw_ios_per_sec": 0, 00:05:02.407 "rw_mbytes_per_sec": 0, 00:05:02.407 "r_mbytes_per_sec": 0, 00:05:02.407 "w_mbytes_per_sec": 0 00:05:02.407 }, 00:05:02.407 "claimed": true, 00:05:02.407 "claim_type": "exclusive_write", 00:05:02.407 "zoned": false, 00:05:02.407 "supported_io_types": { 00:05:02.407 "read": true, 00:05:02.407 "write": true, 00:05:02.407 "unmap": true, 00:05:02.407 "flush": true, 00:05:02.407 "reset": true, 00:05:02.407 "nvme_admin": false, 00:05:02.407 "nvme_io": false, 00:05:02.407 "nvme_io_md": false, 00:05:02.407 "write_zeroes": true, 00:05:02.407 "zcopy": true, 00:05:02.407 "get_zone_info": false, 00:05:02.407 "zone_management": false, 00:05:02.407 "zone_append": false, 00:05:02.407 "compare": false, 00:05:02.407 "compare_and_write": false, 00:05:02.407 "abort": true, 00:05:02.407 "seek_hole": false, 00:05:02.407 "seek_data": false, 00:05:02.407 "copy": true, 00:05:02.407 "nvme_iov_md": false 00:05:02.407 }, 00:05:02.407 "memory_domains": [ 00:05:02.407 { 00:05:02.407 "dma_device_id": "system", 00:05:02.407 "dma_device_type": 1 00:05:02.407 }, 00:05:02.407 { 00:05:02.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.407 "dma_device_type": 2 00:05:02.407 } 00:05:02.407 ], 00:05:02.407 "driver_specific": {} 00:05:02.407 }, 00:05:02.407 { 00:05:02.407 "name": "Passthru0", 00:05:02.407 "aliases": [ 00:05:02.407 "d196e076-51fe-5bb5-903a-d972d43c4e06" 00:05:02.407 ], 00:05:02.407 "product_name": "passthru", 00:05:02.407 
"block_size": 512, 00:05:02.407 "num_blocks": 16384, 00:05:02.407 "uuid": "d196e076-51fe-5bb5-903a-d972d43c4e06", 00:05:02.407 "assigned_rate_limits": { 00:05:02.407 "rw_ios_per_sec": 0, 00:05:02.407 "rw_mbytes_per_sec": 0, 00:05:02.407 "r_mbytes_per_sec": 0, 00:05:02.407 "w_mbytes_per_sec": 0 00:05:02.407 }, 00:05:02.407 "claimed": false, 00:05:02.407 "zoned": false, 00:05:02.407 "supported_io_types": { 00:05:02.407 "read": true, 00:05:02.407 "write": true, 00:05:02.407 "unmap": true, 00:05:02.407 "flush": true, 00:05:02.407 "reset": true, 00:05:02.407 "nvme_admin": false, 00:05:02.407 "nvme_io": false, 00:05:02.407 "nvme_io_md": false, 00:05:02.407 "write_zeroes": true, 00:05:02.407 "zcopy": true, 00:05:02.407 "get_zone_info": false, 00:05:02.407 "zone_management": false, 00:05:02.407 "zone_append": false, 00:05:02.407 "compare": false, 00:05:02.407 "compare_and_write": false, 00:05:02.407 "abort": true, 00:05:02.407 "seek_hole": false, 00:05:02.407 "seek_data": false, 00:05:02.407 "copy": true, 00:05:02.407 "nvme_iov_md": false 00:05:02.407 }, 00:05:02.407 "memory_domains": [ 00:05:02.407 { 00:05:02.407 "dma_device_id": "system", 00:05:02.407 "dma_device_type": 1 00:05:02.407 }, 00:05:02.407 { 00:05:02.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.407 "dma_device_type": 2 00:05:02.407 } 00:05:02.407 ], 00:05:02.407 "driver_specific": { 00:05:02.407 "passthru": { 00:05:02.407 "name": "Passthru0", 00:05:02.407 "base_bdev_name": "Malloc0" 00:05:02.407 } 00:05:02.407 } 00:05:02.407 } 00:05:02.407 ]' 00:05:02.407 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 11:48:00 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.667 11:48:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.667 00:05:02.667 real 0m0.308s 00:05:02.667 user 0m0.186s 00:05:02.667 sys 0m0.059s 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 ************************************ 00:05:02.667 END TEST rpc_integrity 00:05:02.667 ************************************ 00:05:02.667 11:48:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.667 11:48:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.667 11:48:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.667 11:48:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 ************************************ 00:05:02.667 START TEST rpc_plugins 00:05:02.667 ************************************ 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:02.667 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.667 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.667 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.667 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.667 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.667 { 00:05:02.667 "name": "Malloc1", 00:05:02.667 "aliases": [ 00:05:02.667 "60847edf-77d6-4d73-baed-802e7a31df15" 00:05:02.667 ], 00:05:02.667 "product_name": "Malloc disk", 00:05:02.667 "block_size": 4096, 00:05:02.667 "num_blocks": 256, 00:05:02.667 "uuid": "60847edf-77d6-4d73-baed-802e7a31df15", 00:05:02.667 "assigned_rate_limits": { 00:05:02.667 "rw_ios_per_sec": 0, 00:05:02.667 "rw_mbytes_per_sec": 0, 00:05:02.667 "r_mbytes_per_sec": 0, 00:05:02.667 "w_mbytes_per_sec": 0 00:05:02.667 }, 00:05:02.667 "claimed": false, 00:05:02.667 "zoned": false, 00:05:02.667 "supported_io_types": { 00:05:02.667 "read": true, 00:05:02.667 "write": true, 00:05:02.667 "unmap": true, 00:05:02.667 "flush": true, 00:05:02.667 "reset": true, 00:05:02.667 "nvme_admin": false, 00:05:02.667 "nvme_io": false, 00:05:02.667 "nvme_io_md": false, 00:05:02.667 "write_zeroes": true, 00:05:02.667 "zcopy": true, 00:05:02.667 "get_zone_info": false, 00:05:02.667 "zone_management": false, 00:05:02.667 "zone_append": false, 00:05:02.667 "compare": false, 00:05:02.667 "compare_and_write": false, 00:05:02.667 "abort": true, 00:05:02.667 "seek_hole": false, 00:05:02.667 "seek_data": false, 00:05:02.667 "copy": 
true, 00:05:02.667 "nvme_iov_md": false 00:05:02.667 }, 00:05:02.667 "memory_domains": [ 00:05:02.667 { 00:05:02.667 "dma_device_id": "system", 00:05:02.667 "dma_device_type": 1 00:05:02.667 }, 00:05:02.667 { 00:05:02.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.667 "dma_device_type": 2 00:05:02.667 } 00:05:02.667 ], 00:05:02.667 "driver_specific": {} 00:05:02.667 } 00:05:02.667 ]' 00:05:02.667 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.927 11:48:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.927 00:05:02.927 real 0m0.169s 00:05:02.927 user 0m0.098s 00:05:02.927 sys 0m0.029s 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.927 11:48:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.927 ************************************ 00:05:02.927 END TEST rpc_plugins 00:05:02.927 ************************************ 00:05:02.927 11:48:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.927 11:48:00 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.927 11:48:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.927 11:48:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.927 ************************************ 00:05:02.927 START TEST rpc_trace_cmd_test 00:05:02.927 ************************************ 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.927 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.927 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68815", 00:05:02.927 "tpoint_group_mask": "0x8", 00:05:02.927 "iscsi_conn": { 00:05:02.927 "mask": "0x2", 00:05:02.927 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "scsi": { 00:05:02.928 "mask": "0x4", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "bdev": { 00:05:02.928 "mask": "0x8", 00:05:02.928 "tpoint_mask": "0xffffffffffffffff" 00:05:02.928 }, 00:05:02.928 "nvmf_rdma": { 00:05:02.928 "mask": "0x10", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "nvmf_tcp": { 00:05:02.928 "mask": "0x20", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "ftl": { 00:05:02.928 "mask": "0x40", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "blobfs": { 00:05:02.928 "mask": "0x80", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "dsa": { 00:05:02.928 "mask": "0x200", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "thread": { 00:05:02.928 "mask": "0x400", 00:05:02.928 
"tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "nvme_pcie": { 00:05:02.928 "mask": "0x800", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "iaa": { 00:05:02.928 "mask": "0x1000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "nvme_tcp": { 00:05:02.928 "mask": "0x2000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "bdev_nvme": { 00:05:02.928 "mask": "0x4000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "sock": { 00:05:02.928 "mask": "0x8000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "blob": { 00:05:02.928 "mask": "0x10000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 }, 00:05:02.928 "bdev_raid": { 00:05:02.928 "mask": "0x20000", 00:05:02.928 "tpoint_mask": "0x0" 00:05:02.928 } 00:05:02.928 }' 00:05:02.928 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.928 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:02.928 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.928 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.928 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.187 00:05:03.187 real 0m0.222s 00:05:03.187 user 0m0.169s 00:05:03.187 sys 0m0.041s 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.187 11:48:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 
************************************ 00:05:03.187 END TEST rpc_trace_cmd_test 00:05:03.187 ************************************ 00:05:03.187 11:48:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.187 11:48:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.187 11:48:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.187 11:48:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.187 11:48:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.187 11:48:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 ************************************ 00:05:03.187 START TEST rpc_daemon_integrity 00:05:03.187 ************************************ 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.187 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 11:48:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.448 11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.448 { 00:05:03.448 "name": "Malloc2", 00:05:03.448 "aliases": [ 00:05:03.448 "6d6c2e4a-78b7-4a38-b91b-1073094489d8" 00:05:03.448 ], 00:05:03.448 "product_name": "Malloc disk", 00:05:03.448 "block_size": 512, 00:05:03.448 "num_blocks": 16384, 00:05:03.448 "uuid": "6d6c2e4a-78b7-4a38-b91b-1073094489d8", 00:05:03.448 "assigned_rate_limits": { 00:05:03.448 "rw_ios_per_sec": 0, 00:05:03.448 "rw_mbytes_per_sec": 0, 00:05:03.448 "r_mbytes_per_sec": 0, 00:05:03.448 "w_mbytes_per_sec": 0 00:05:03.448 }, 00:05:03.448 "claimed": false, 00:05:03.448 "zoned": false, 00:05:03.448 "supported_io_types": { 00:05:03.448 "read": true, 00:05:03.448 "write": true, 00:05:03.448 "unmap": true, 00:05:03.448 "flush": true, 00:05:03.448 "reset": true, 00:05:03.448 "nvme_admin": false, 00:05:03.448 "nvme_io": false, 00:05:03.448 "nvme_io_md": false, 00:05:03.448 "write_zeroes": true, 00:05:03.448 "zcopy": true, 00:05:03.448 "get_zone_info": false, 00:05:03.448 "zone_management": false, 00:05:03.448 "zone_append": false, 00:05:03.448 "compare": false, 00:05:03.448 "compare_and_write": false, 00:05:03.448 "abort": true, 00:05:03.448 "seek_hole": false, 00:05:03.448 "seek_data": false, 00:05:03.448 "copy": true, 00:05:03.448 "nvme_iov_md": false 00:05:03.448 }, 00:05:03.448 "memory_domains": [ 00:05:03.448 { 00:05:03.448 "dma_device_id": "system", 00:05:03.448 "dma_device_type": 1 00:05:03.448 }, 00:05:03.448 { 00:05:03.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.448 "dma_device_type": 2 00:05:03.448 } 00:05:03.448 ], 00:05:03.448 "driver_specific": {} 00:05:03.448 } 00:05:03.448 ]' 00:05:03.448 
11:48:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 [2024-12-06 11:48:01.015753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.448 [2024-12-06 11:48:01.015835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.448 [2024-12-06 11:48:01.015861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:03.448 [2024-12-06 11:48:01.015871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.448 [2024-12-06 11:48:01.018218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.448 [2024-12-06 11:48:01.018266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.448 Passthru0 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.448 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.448 { 00:05:03.448 "name": "Malloc2", 00:05:03.448 "aliases": [ 00:05:03.448 "6d6c2e4a-78b7-4a38-b91b-1073094489d8" 00:05:03.448 ], 00:05:03.448 "product_name": "Malloc disk", 00:05:03.448 "block_size": 512, 
00:05:03.448 "num_blocks": 16384, 00:05:03.448 "uuid": "6d6c2e4a-78b7-4a38-b91b-1073094489d8", 00:05:03.448 "assigned_rate_limits": { 00:05:03.448 "rw_ios_per_sec": 0, 00:05:03.448 "rw_mbytes_per_sec": 0, 00:05:03.448 "r_mbytes_per_sec": 0, 00:05:03.448 "w_mbytes_per_sec": 0 00:05:03.448 }, 00:05:03.448 "claimed": true, 00:05:03.448 "claim_type": "exclusive_write", 00:05:03.448 "zoned": false, 00:05:03.448 "supported_io_types": { 00:05:03.448 "read": true, 00:05:03.448 "write": true, 00:05:03.448 "unmap": true, 00:05:03.448 "flush": true, 00:05:03.448 "reset": true, 00:05:03.448 "nvme_admin": false, 00:05:03.448 "nvme_io": false, 00:05:03.448 "nvme_io_md": false, 00:05:03.448 "write_zeroes": true, 00:05:03.448 "zcopy": true, 00:05:03.448 "get_zone_info": false, 00:05:03.448 "zone_management": false, 00:05:03.448 "zone_append": false, 00:05:03.448 "compare": false, 00:05:03.448 "compare_and_write": false, 00:05:03.448 "abort": true, 00:05:03.448 "seek_hole": false, 00:05:03.448 "seek_data": false, 00:05:03.448 "copy": true, 00:05:03.448 "nvme_iov_md": false 00:05:03.448 }, 00:05:03.448 "memory_domains": [ 00:05:03.448 { 00:05:03.448 "dma_device_id": "system", 00:05:03.448 "dma_device_type": 1 00:05:03.448 }, 00:05:03.448 { 00:05:03.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.448 "dma_device_type": 2 00:05:03.448 } 00:05:03.448 ], 00:05:03.448 "driver_specific": {} 00:05:03.448 }, 00:05:03.448 { 00:05:03.448 "name": "Passthru0", 00:05:03.448 "aliases": [ 00:05:03.448 "ed515184-f25b-59e1-a8d1-4027d9a2bb14" 00:05:03.448 ], 00:05:03.448 "product_name": "passthru", 00:05:03.448 "block_size": 512, 00:05:03.448 "num_blocks": 16384, 00:05:03.448 "uuid": "ed515184-f25b-59e1-a8d1-4027d9a2bb14", 00:05:03.448 "assigned_rate_limits": { 00:05:03.448 "rw_ios_per_sec": 0, 00:05:03.448 "rw_mbytes_per_sec": 0, 00:05:03.448 "r_mbytes_per_sec": 0, 00:05:03.448 "w_mbytes_per_sec": 0 00:05:03.448 }, 00:05:03.448 "claimed": false, 00:05:03.448 "zoned": false, 00:05:03.448 
"supported_io_types": { 00:05:03.448 "read": true, 00:05:03.448 "write": true, 00:05:03.449 "unmap": true, 00:05:03.449 "flush": true, 00:05:03.449 "reset": true, 00:05:03.449 "nvme_admin": false, 00:05:03.449 "nvme_io": false, 00:05:03.449 "nvme_io_md": false, 00:05:03.449 "write_zeroes": true, 00:05:03.449 "zcopy": true, 00:05:03.449 "get_zone_info": false, 00:05:03.449 "zone_management": false, 00:05:03.449 "zone_append": false, 00:05:03.449 "compare": false, 00:05:03.449 "compare_and_write": false, 00:05:03.449 "abort": true, 00:05:03.449 "seek_hole": false, 00:05:03.449 "seek_data": false, 00:05:03.449 "copy": true, 00:05:03.449 "nvme_iov_md": false 00:05:03.449 }, 00:05:03.449 "memory_domains": [ 00:05:03.449 { 00:05:03.449 "dma_device_id": "system", 00:05:03.449 "dma_device_type": 1 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.449 "dma_device_type": 2 00:05:03.449 } 00:05:03.449 ], 00:05:03.449 "driver_specific": { 00:05:03.449 "passthru": { 00:05:03.449 "name": "Passthru0", 00:05:03.449 "base_bdev_name": "Malloc2" 00:05:03.449 } 00:05:03.449 } 00:05:03.449 } 00:05:03.449 ]' 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.449 00:05:03.449 real 0m0.325s 00:05:03.449 user 0m0.202s 00:05:03.449 sys 0m0.056s 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.449 11:48:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 ************************************ 00:05:03.449 END TEST rpc_daemon_integrity 00:05:03.449 ************************************ 00:05:03.709 11:48:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.709 11:48:01 rpc -- rpc/rpc.sh@84 -- # killprocess 68815 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 68815 ']' 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@954 -- # kill -0 68815 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@955 -- # uname 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68815 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.709 killing process with pid 68815 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 68815' 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@969 -- # kill 68815 00:05:03.709 11:48:01 rpc -- common/autotest_common.sh@974 -- # wait 68815 00:05:03.970 00:05:03.970 real 0m2.849s 00:05:03.970 user 0m3.425s 00:05:03.970 sys 0m0.858s 00:05:03.970 11:48:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.970 11:48:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.970 ************************************ 00:05:03.970 END TEST rpc 00:05:03.970 ************************************ 00:05:04.230 11:48:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.230 11:48:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.230 11:48:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.230 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:04.230 ************************************ 00:05:04.230 START TEST skip_rpc 00:05:04.230 ************************************ 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.230 * Looking for test storage... 
00:05:04.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.230 11:48:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.230 --rc genhtml_branch_coverage=1 00:05:04.230 --rc genhtml_function_coverage=1 00:05:04.230 --rc genhtml_legend=1 00:05:04.230 --rc geninfo_all_blocks=1 00:05:04.230 --rc geninfo_unexecuted_blocks=1 00:05:04.230 00:05:04.230 ' 00:05:04.230 11:48:01 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.230 --rc genhtml_branch_coverage=1 00:05:04.230 --rc genhtml_function_coverage=1 00:05:04.230 --rc genhtml_legend=1 00:05:04.230 --rc geninfo_all_blocks=1 00:05:04.230 --rc geninfo_unexecuted_blocks=1 00:05:04.230 00:05:04.231 ' 00:05:04.231 11:48:01 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:04.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.231 --rc genhtml_branch_coverage=1 00:05:04.231 --rc genhtml_function_coverage=1 00:05:04.231 --rc genhtml_legend=1 00:05:04.231 --rc geninfo_all_blocks=1 00:05:04.231 --rc geninfo_unexecuted_blocks=1 00:05:04.231 00:05:04.231 ' 00:05:04.231 11:48:01 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.231 --rc genhtml_branch_coverage=1 00:05:04.231 --rc genhtml_function_coverage=1 00:05:04.231 --rc genhtml_legend=1 00:05:04.231 --rc geninfo_all_blocks=1 00:05:04.231 --rc geninfo_unexecuted_blocks=1 00:05:04.231 00:05:04.231 ' 00:05:04.231 11:48:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.231 11:48:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.231 11:48:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.231 11:48:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.231 11:48:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.231 11:48:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 ************************************ 00:05:04.231 START TEST skip_rpc 00:05:04.231 ************************************ 00:05:04.231 11:48:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:04.231 11:48:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69022 00:05:04.231 11:48:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.231 11:48:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.231 11:48:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.492 [2024-12-06 11:48:02.067053] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:04.492 [2024-12-06 11:48:02.067181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69022 ] 00:05:04.492 [2024-12-06 11:48:02.213227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.752 [2024-12-06 11:48:02.265440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69022 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69022 ']' 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69022 00:05:10.061 11:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69022 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.061 killing process with pid 69022 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69022' 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69022 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69022 00:05:10.061 00:05:10.061 real 0m5.449s 00:05:10.061 user 0m5.056s 00:05:10.061 sys 0m0.320s 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.061 11:48:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 ************************************ 00:05:10.061 END TEST skip_rpc 00:05:10.061 ************************************ 00:05:10.061 11:48:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.061 11:48:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.061 11:48:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.061 11:48:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 
************************************ 00:05:10.061 START TEST skip_rpc_with_json 00:05:10.061 ************************************ 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69108 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69108 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69108 ']' 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.061 11:48:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 [2024-12-06 11:48:07.582926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:10.061 [2024-12-06 11:48:07.583083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69108 ] 00:05:10.061 [2024-12-06 11:48:07.728260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.061 [2024-12-06 11:48:07.779612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.002 [2024-12-06 11:48:08.435825] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.002 request: 00:05:11.002 { 00:05:11.002 "trtype": "tcp", 00:05:11.002 "method": "nvmf_get_transports", 00:05:11.002 "req_id": 1 00:05:11.002 } 00:05:11.002 Got JSON-RPC error response 00:05:11.002 response: 00:05:11.002 { 00:05:11.002 "code": -19, 00:05:11.002 "message": "No such device" 00:05:11.002 } 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.002 [2024-12-06 11:48:08.447909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.002 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.002 { 00:05:11.002 "subsystems": [ 00:05:11.002 { 00:05:11.002 "subsystem": "fsdev", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "fsdev_set_opts", 00:05:11.002 "params": { 00:05:11.002 "fsdev_io_pool_size": 65535, 00:05:11.002 "fsdev_io_cache_size": 256 00:05:11.002 } 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "keyring", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "iobuf", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "iobuf_set_options", 00:05:11.002 "params": { 00:05:11.002 "small_pool_count": 8192, 00:05:11.002 "large_pool_count": 1024, 00:05:11.002 "small_bufsize": 8192, 00:05:11.002 "large_bufsize": 135168 00:05:11.002 } 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "sock", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "sock_set_default_impl", 00:05:11.002 "params": { 00:05:11.002 "impl_name": "posix" 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "sock_impl_set_options", 00:05:11.002 "params": { 00:05:11.002 "impl_name": "ssl", 00:05:11.002 "recv_buf_size": 4096, 00:05:11.002 "send_buf_size": 4096, 00:05:11.002 "enable_recv_pipe": true, 00:05:11.002 "enable_quickack": false, 00:05:11.002 "enable_placement_id": 0, 00:05:11.002 
"enable_zerocopy_send_server": true, 00:05:11.002 "enable_zerocopy_send_client": false, 00:05:11.002 "zerocopy_threshold": 0, 00:05:11.002 "tls_version": 0, 00:05:11.002 "enable_ktls": false 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "sock_impl_set_options", 00:05:11.002 "params": { 00:05:11.002 "impl_name": "posix", 00:05:11.002 "recv_buf_size": 2097152, 00:05:11.002 "send_buf_size": 2097152, 00:05:11.002 "enable_recv_pipe": true, 00:05:11.002 "enable_quickack": false, 00:05:11.002 "enable_placement_id": 0, 00:05:11.002 "enable_zerocopy_send_server": true, 00:05:11.002 "enable_zerocopy_send_client": false, 00:05:11.002 "zerocopy_threshold": 0, 00:05:11.002 "tls_version": 0, 00:05:11.002 "enable_ktls": false 00:05:11.002 } 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "vmd", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "accel", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "accel_set_options", 00:05:11.002 "params": { 00:05:11.002 "small_cache_size": 128, 00:05:11.002 "large_cache_size": 16, 00:05:11.002 "task_count": 2048, 00:05:11.002 "sequence_count": 2048, 00:05:11.002 "buf_count": 2048 00:05:11.002 } 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "bdev", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "bdev_set_options", 00:05:11.002 "params": { 00:05:11.002 "bdev_io_pool_size": 65535, 00:05:11.002 "bdev_io_cache_size": 256, 00:05:11.002 "bdev_auto_examine": true, 00:05:11.002 "iobuf_small_cache_size": 128, 00:05:11.002 "iobuf_large_cache_size": 16 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "bdev_raid_set_options", 00:05:11.002 "params": { 00:05:11.002 "process_window_size_kb": 1024, 00:05:11.002 "process_max_bandwidth_mb_sec": 0 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "bdev_iscsi_set_options", 00:05:11.002 "params": { 00:05:11.002 
"timeout_sec": 30 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "bdev_nvme_set_options", 00:05:11.002 "params": { 00:05:11.002 "action_on_timeout": "none", 00:05:11.002 "timeout_us": 0, 00:05:11.002 "timeout_admin_us": 0, 00:05:11.002 "keep_alive_timeout_ms": 10000, 00:05:11.002 "arbitration_burst": 0, 00:05:11.002 "low_priority_weight": 0, 00:05:11.002 "medium_priority_weight": 0, 00:05:11.002 "high_priority_weight": 0, 00:05:11.002 "nvme_adminq_poll_period_us": 10000, 00:05:11.002 "nvme_ioq_poll_period_us": 0, 00:05:11.002 "io_queue_requests": 0, 00:05:11.002 "delay_cmd_submit": true, 00:05:11.002 "transport_retry_count": 4, 00:05:11.002 "bdev_retry_count": 3, 00:05:11.002 "transport_ack_timeout": 0, 00:05:11.002 "ctrlr_loss_timeout_sec": 0, 00:05:11.002 "reconnect_delay_sec": 0, 00:05:11.002 "fast_io_fail_timeout_sec": 0, 00:05:11.002 "disable_auto_failback": false, 00:05:11.002 "generate_uuids": false, 00:05:11.002 "transport_tos": 0, 00:05:11.002 "nvme_error_stat": false, 00:05:11.002 "rdma_srq_size": 0, 00:05:11.002 "io_path_stat": false, 00:05:11.002 "allow_accel_sequence": false, 00:05:11.002 "rdma_max_cq_size": 0, 00:05:11.002 "rdma_cm_event_timeout_ms": 0, 00:05:11.002 "dhchap_digests": [ 00:05:11.002 "sha256", 00:05:11.002 "sha384", 00:05:11.002 "sha512" 00:05:11.002 ], 00:05:11.002 "dhchap_dhgroups": [ 00:05:11.002 "null", 00:05:11.002 "ffdhe2048", 00:05:11.002 "ffdhe3072", 00:05:11.002 "ffdhe4096", 00:05:11.002 "ffdhe6144", 00:05:11.002 "ffdhe8192" 00:05:11.002 ] 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "bdev_nvme_set_hotplug", 00:05:11.002 "params": { 00:05:11.002 "period_us": 100000, 00:05:11.002 "enable": false 00:05:11.002 } 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "method": "bdev_wait_for_examine" 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "scsi", 00:05:11.002 "config": null 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "scheduler", 
00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "framework_set_scheduler", 00:05:11.002 "params": { 00:05:11.002 "name": "static" 00:05:11.002 } 00:05:11.002 } 00:05:11.002 ] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "vhost_scsi", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "vhost_blk", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "ublk", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "nbd", 00:05:11.002 "config": [] 00:05:11.002 }, 00:05:11.002 { 00:05:11.002 "subsystem": "nvmf", 00:05:11.002 "config": [ 00:05:11.002 { 00:05:11.002 "method": "nvmf_set_config", 00:05:11.002 "params": { 00:05:11.002 "discovery_filter": "match_any", 00:05:11.002 "admin_cmd_passthru": { 00:05:11.002 "identify_ctrlr": false 00:05:11.002 }, 00:05:11.002 "dhchap_digests": [ 00:05:11.002 "sha256", 00:05:11.002 "sha384", 00:05:11.002 "sha512" 00:05:11.002 ], 00:05:11.002 "dhchap_dhgroups": [ 00:05:11.002 "null", 00:05:11.002 "ffdhe2048", 00:05:11.002 "ffdhe3072", 00:05:11.002 "ffdhe4096", 00:05:11.002 "ffdhe6144", 00:05:11.003 "ffdhe8192" 00:05:11.003 ] 00:05:11.003 } 00:05:11.003 }, 00:05:11.003 { 00:05:11.003 "method": "nvmf_set_max_subsystems", 00:05:11.003 "params": { 00:05:11.003 "max_subsystems": 1024 00:05:11.003 } 00:05:11.003 }, 00:05:11.003 { 00:05:11.003 "method": "nvmf_set_crdt", 00:05:11.003 "params": { 00:05:11.003 "crdt1": 0, 00:05:11.003 "crdt2": 0, 00:05:11.003 "crdt3": 0 00:05:11.003 } 00:05:11.003 }, 00:05:11.003 { 00:05:11.003 "method": "nvmf_create_transport", 00:05:11.003 "params": { 00:05:11.003 "trtype": "TCP", 00:05:11.003 "max_queue_depth": 128, 00:05:11.003 "max_io_qpairs_per_ctrlr": 127, 00:05:11.003 "in_capsule_data_size": 4096, 00:05:11.003 "max_io_size": 131072, 00:05:11.003 "io_unit_size": 131072, 00:05:11.003 "max_aq_depth": 128, 00:05:11.003 "num_shared_buffers": 511, 00:05:11.003 "buf_cache_size": 4294967295, 
00:05:11.003 "dif_insert_or_strip": false, 00:05:11.003 "zcopy": false, 00:05:11.003 "c2h_success": true, 00:05:11.003 "sock_priority": 0, 00:05:11.003 "abort_timeout_sec": 1, 00:05:11.003 "ack_timeout": 0, 00:05:11.003 "data_wr_pool_size": 0 00:05:11.003 } 00:05:11.003 } 00:05:11.003 ] 00:05:11.003 }, 00:05:11.003 { 00:05:11.003 "subsystem": "iscsi", 00:05:11.003 "config": [ 00:05:11.003 { 00:05:11.003 "method": "iscsi_set_options", 00:05:11.003 "params": { 00:05:11.003 "node_base": "iqn.2016-06.io.spdk", 00:05:11.003 "max_sessions": 128, 00:05:11.003 "max_connections_per_session": 2, 00:05:11.003 "max_queue_depth": 64, 00:05:11.003 "default_time2wait": 2, 00:05:11.003 "default_time2retain": 20, 00:05:11.003 "first_burst_length": 8192, 00:05:11.003 "immediate_data": true, 00:05:11.003 "allow_duplicated_isid": false, 00:05:11.003 "error_recovery_level": 0, 00:05:11.003 "nop_timeout": 60, 00:05:11.003 "nop_in_interval": 30, 00:05:11.003 "disable_chap": false, 00:05:11.003 "require_chap": false, 00:05:11.003 "mutual_chap": false, 00:05:11.003 "chap_group": 0, 00:05:11.003 "max_large_datain_per_connection": 64, 00:05:11.003 "max_r2t_per_connection": 4, 00:05:11.003 "pdu_pool_size": 36864, 00:05:11.003 "immediate_data_pool_size": 16384, 00:05:11.003 "data_out_pool_size": 2048 00:05:11.003 } 00:05:11.003 } 00:05:11.003 ] 00:05:11.003 } 00:05:11.003 ] 00:05:11.003 } 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69108 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69108 ']' 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69108 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69108 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.003 killing process with pid 69108 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69108' 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69108 00:05:11.003 11:48:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69108 00:05:11.574 11:48:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69138 00:05:11.574 11:48:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.574 11:48:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69138 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69138 ']' 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69138 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69138 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.849 killing process with pid 69138 
00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69138' 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69138 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69138 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.849 00:05:16.849 real 0m7.007s 00:05:16.849 user 0m6.590s 00:05:16.849 sys 0m0.726s 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.849 11:48:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.849 ************************************ 00:05:16.849 END TEST skip_rpc_with_json 00:05:16.850 ************************************ 00:05:16.850 11:48:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.850 11:48:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.850 11:48:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.850 11:48:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.850 ************************************ 00:05:16.850 START TEST skip_rpc_with_delay 00:05:16.850 ************************************ 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.850 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.108 [2024-12-06 11:48:14.663087] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.108 [2024-12-06 11:48:14.663226] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:17.108 00:05:17.108 real 0m0.157s 00:05:17.108 user 0m0.077s 00:05:17.108 sys 0m0.078s 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.108 11:48:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.108 ************************************ 00:05:17.108 END TEST skip_rpc_with_delay 00:05:17.108 ************************************ 00:05:17.108 11:48:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.108 11:48:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.108 11:48:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.108 11:48:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.108 11:48:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.108 11:48:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.108 ************************************ 00:05:17.108 START TEST exit_on_failed_rpc_init 00:05:17.108 ************************************ 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69244 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69244 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69244 ']' 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.108 11:48:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.366 [2024-12-06 11:48:14.888813] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:17.366 [2024-12-06 11:48:14.888933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69244 ] 00:05:17.366 [2024-12-06 11:48:15.014152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.366 [2024-12-06 11:48:15.063771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.301 11:48:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.301 11:48:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.301 [2024-12-06 11:48:15.827087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:18.301 [2024-12-06 11:48:15.827273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69262 ] 00:05:18.301 [2024-12-06 11:48:15.974915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.561 [2024-12-06 11:48:16.057378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.561 [2024-12-06 11:48:16.057532] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:18.561 [2024-12-06 11:48:16.057551] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.561 [2024-12-06 11:48:16.057583] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69244 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69244 ']' 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69244 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69244 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.561 killing process with pid 69244 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69244' 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69244 00:05:18.561 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69244 00:05:19.128 00:05:19.128 real 0m1.867s 00:05:19.128 user 0m2.070s 00:05:19.128 sys 0m0.539s 00:05:19.128 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.128 11:48:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 ************************************ 00:05:19.128 END TEST exit_on_failed_rpc_init 00:05:19.128 ************************************ 00:05:19.128 11:48:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.128 00:05:19.128 real 0m14.982s 00:05:19.128 user 0m14.004s 00:05:19.128 sys 0m1.975s 00:05:19.128 11:48:16 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.128 11:48:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 ************************************ 00:05:19.128 END TEST skip_rpc 00:05:19.128 ************************************ 00:05:19.128 11:48:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.128 11:48:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.128 11:48:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.128 11:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 ************************************ 00:05:19.128 START TEST rpc_client 00:05:19.128 ************************************ 00:05:19.128 11:48:16 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.387 * Looking for test storage... 
00:05:19.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.387 11:48:16 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.387 11:48:16 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.387 11:48:16 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.387 11:48:16 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:19.387 11:48:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.388 11:48:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:19.388 11:48:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.388 11:48:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.388 --rc genhtml_branch_coverage=1 00:05:19.388 --rc genhtml_function_coverage=1 00:05:19.388 --rc genhtml_legend=1 00:05:19.388 --rc geninfo_all_blocks=1 00:05:19.388 --rc geninfo_unexecuted_blocks=1 00:05:19.388 00:05:19.388 ' 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.388 --rc genhtml_branch_coverage=1 00:05:19.388 --rc genhtml_function_coverage=1 00:05:19.388 --rc genhtml_legend=1 00:05:19.388 --rc geninfo_all_blocks=1 00:05:19.388 --rc geninfo_unexecuted_blocks=1 00:05:19.388 00:05:19.388 ' 00:05:19.388 11:48:17 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.388 --rc genhtml_branch_coverage=1 00:05:19.388 --rc genhtml_function_coverage=1 00:05:19.388 --rc genhtml_legend=1 00:05:19.388 --rc geninfo_all_blocks=1 00:05:19.388 --rc geninfo_unexecuted_blocks=1 00:05:19.388 00:05:19.388 ' 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.388 --rc genhtml_branch_coverage=1 00:05:19.388 --rc genhtml_function_coverage=1 00:05:19.388 --rc genhtml_legend=1 00:05:19.388 --rc geninfo_all_blocks=1 00:05:19.388 --rc geninfo_unexecuted_blocks=1 00:05:19.388 00:05:19.388 ' 00:05:19.388 11:48:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.388 OK 00:05:19.388 11:48:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.388 00:05:19.388 real 0m0.290s 00:05:19.388 user 0m0.157s 00:05:19.388 sys 0m0.152s 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.388 11:48:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.388 ************************************ 00:05:19.388 END TEST rpc_client 00:05:19.388 ************************************ 00:05:19.388 11:48:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.388 11:48:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.388 11:48:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.388 11:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.648 ************************************ 00:05:19.648 START TEST json_config 00:05:19.648 ************************************ 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.648 11:48:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.648 11:48:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.648 11:48:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.648 11:48:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.648 11:48:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.648 11:48:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:19.648 11:48:17 json_config -- scripts/common.sh@345 -- # : 1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.648 11:48:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.648 11:48:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@353 -- # local d=1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.648 11:48:17 json_config -- scripts/common.sh@355 -- # echo 1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.648 11:48:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@353 -- # local d=2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.648 11:48:17 json_config -- scripts/common.sh@355 -- # echo 2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.648 11:48:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.648 11:48:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.648 11:48:17 json_config -- scripts/common.sh@368 -- # return 0 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.648 --rc genhtml_branch_coverage=1 00:05:19.648 --rc genhtml_function_coverage=1 00:05:19.648 --rc genhtml_legend=1 00:05:19.648 --rc geninfo_all_blocks=1 00:05:19.648 --rc geninfo_unexecuted_blocks=1 00:05:19.648 00:05:19.648 ' 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.648 --rc genhtml_branch_coverage=1 00:05:19.648 --rc genhtml_function_coverage=1 00:05:19.648 --rc genhtml_legend=1 00:05:19.648 --rc geninfo_all_blocks=1 00:05:19.648 --rc geninfo_unexecuted_blocks=1 00:05:19.648 00:05:19.648 ' 00:05:19.648 11:48:17 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.648 --rc genhtml_branch_coverage=1 00:05:19.648 --rc genhtml_function_coverage=1 00:05:19.648 --rc genhtml_legend=1 00:05:19.648 --rc geninfo_all_blocks=1 00:05:19.648 --rc geninfo_unexecuted_blocks=1 00:05:19.648 00:05:19.648 ' 00:05:19.648 11:48:17 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.648 --rc genhtml_branch_coverage=1 00:05:19.648 --rc genhtml_function_coverage=1 00:05:19.648 --rc genhtml_legend=1 00:05:19.648 --rc geninfo_all_blocks=1 00:05:19.648 --rc geninfo_unexecuted_blocks=1 00:05:19.648 00:05:19.648 ' 00:05:19.648 11:48:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.648 11:48:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.648 11:48:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.648 11:48:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.648 11:48:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.648 11:48:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.648 11:48:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.648 11:48:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.649 11:48:17 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.649 11:48:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.649 11:48:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@51 -- # : 0 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.649 11:48:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:19.649 WARNING: No tests are enabled so not running JSON configuration tests 00:05:19.649 11:48:17 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:19.649 00:05:19.649 real 0m0.228s 00:05:19.649 user 0m0.141s 00:05:19.649 sys 0m0.093s 00:05:19.649 11:48:17 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.649 11:48:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.649 ************************************ 00:05:19.649 END TEST json_config 00:05:19.649 ************************************ 00:05:19.909 11:48:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.909 11:48:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.909 11:48:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.909 11:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.909 ************************************ 00:05:19.909 START TEST json_config_extra_key 00:05:19.909 ************************************ 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.909 11:48:17 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.909 11:48:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.909 --rc genhtml_branch_coverage=1 00:05:19.909 --rc genhtml_function_coverage=1 00:05:19.909 --rc genhtml_legend=1 00:05:19.909 --rc geninfo_all_blocks=1 00:05:19.909 --rc geninfo_unexecuted_blocks=1 00:05:19.909 00:05:19.909 ' 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.909 --rc genhtml_branch_coverage=1 00:05:19.909 --rc genhtml_function_coverage=1 00:05:19.909 --rc 
genhtml_legend=1 00:05:19.909 --rc geninfo_all_blocks=1 00:05:19.909 --rc geninfo_unexecuted_blocks=1 00:05:19.909 00:05:19.909 ' 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.909 --rc genhtml_branch_coverage=1 00:05:19.909 --rc genhtml_function_coverage=1 00:05:19.909 --rc genhtml_legend=1 00:05:19.909 --rc geninfo_all_blocks=1 00:05:19.909 --rc geninfo_unexecuted_blocks=1 00:05:19.909 00:05:19.909 ' 00:05:19.909 11:48:17 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.909 --rc genhtml_branch_coverage=1 00:05:19.909 --rc genhtml_function_coverage=1 00:05:19.909 --rc genhtml_legend=1 00:05:19.909 --rc geninfo_all_blocks=1 00:05:19.909 --rc geninfo_unexecuted_blocks=1 00:05:19.909 00:05:19.909 ' 00:05:19.910 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a676ec9a-0fd5-4c76-8bc0-12dbed7f747a 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.910 11:48:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.910 11:48:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.910 11:48:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.910 11:48:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.910 11:48:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.910 11:48:17 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.910 11:48:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.910 11:48:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.910 11:48:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.910 11:48:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:20.175 INFO: launching applications... 
00:05:20.175 11:48:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69450 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.175 Waiting for target to run... 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69450 /var/tmp/spdk_tgt.sock 00:05:20.175 11:48:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69450 ']' 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:20.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.175 11:48:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 [2024-12-06 11:48:17.777053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:20.175 [2024-12-06 11:48:17.777216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69450 ] 00:05:20.765 [2024-12-06 11:48:18.317070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.765 [2024-12-06 11:48:18.348998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.024 00:05:21.024 INFO: shutting down applications... 00:05:21.024 11:48:18 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.024 11:48:18 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:21.024 11:48:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:21.024 11:48:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69450 ]] 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69450 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69450 00:05:21.024 11:48:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69450 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.593 11:48:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.593 SPDK target shutdown done 00:05:21.593 11:48:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.593 Success 00:05:21.593 00:05:21.593 real 0m1.660s 00:05:21.593 user 0m1.206s 00:05:21.593 sys 0m0.649s 00:05:21.593 ************************************ 00:05:21.593 END TEST json_config_extra_key 00:05:21.593 ************************************ 00:05:21.593 11:48:19 json_config_extra_key -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:21.593 11:48:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.593 11:48:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.593 11:48:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.593 11:48:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.593 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.593 ************************************ 00:05:21.593 START TEST alias_rpc 00:05:21.593 ************************************ 00:05:21.593 11:48:19 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.593 * Looking for test storage... 00:05:21.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.593 11:48:19 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.593 11:48:19 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.593 11:48:19 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.853 11:48:19 alias_rpc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.853 11:48:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.853 --rc genhtml_branch_coverage=1 00:05:21.853 --rc genhtml_function_coverage=1 00:05:21.853 --rc genhtml_legend=1 00:05:21.853 --rc geninfo_all_blocks=1 00:05:21.853 --rc geninfo_unexecuted_blocks=1 00:05:21.853 00:05:21.853 ' 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.853 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.853 --rc genhtml_branch_coverage=1 00:05:21.853 --rc genhtml_function_coverage=1 00:05:21.853 --rc genhtml_legend=1 00:05:21.853 --rc geninfo_all_blocks=1 00:05:21.853 --rc geninfo_unexecuted_blocks=1 00:05:21.853 00:05:21.853 ' 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.853 --rc genhtml_branch_coverage=1 00:05:21.853 --rc genhtml_function_coverage=1 00:05:21.853 --rc genhtml_legend=1 00:05:21.853 --rc geninfo_all_blocks=1 00:05:21.853 --rc geninfo_unexecuted_blocks=1 00:05:21.853 00:05:21.853 ' 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.853 --rc genhtml_branch_coverage=1 00:05:21.853 --rc genhtml_function_coverage=1 00:05:21.853 --rc genhtml_legend=1 00:05:21.853 --rc geninfo_all_blocks=1 00:05:21.853 --rc geninfo_unexecuted_blocks=1 00:05:21.853 00:05:21.853 ' 00:05:21.853 11:48:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.853 11:48:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69529 00:05:21.853 11:48:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.853 11:48:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69529 00:05:21.853 11:48:19 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69529 ']' 00:05:21.854 11:48:19 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.854 11:48:19 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.854 11:48:19 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:21.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.854 11:48:19 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.854 11:48:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.854 [2024-12-06 11:48:19.492279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:21.854 [2024-12-06 11:48:19.492515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69529 ] 00:05:22.117 [2024-12-06 11:48:19.636768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.117 [2024-12-06 11:48:19.680444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.687 11:48:20 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.687 11:48:20 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.687 11:48:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:22.947 11:48:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69529 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69529 ']' 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69529 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69529 00:05:22.947 killing process with pid 69529 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.947 11:48:20 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69529' 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@969 -- # kill 69529 00:05:22.947 11:48:20 alias_rpc -- common/autotest_common.sh@974 -- # wait 69529 00:05:23.518 ************************************ 00:05:23.518 END TEST alias_rpc 00:05:23.518 ************************************ 00:05:23.518 00:05:23.518 real 0m1.805s 00:05:23.518 user 0m1.842s 00:05:23.518 sys 0m0.502s 00:05:23.518 11:48:20 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.518 11:48:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.518 11:48:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:23.518 11:48:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.518 11:48:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.518 11:48:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.518 11:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.518 ************************************ 00:05:23.518 START TEST spdkcli_tcp 00:05:23.518 ************************************ 00:05:23.518 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.518 * Looking for test storage... 
00:05:23.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:23.518 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.518 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.518 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.518 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:23.518 11:48:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.779 11:48:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.779 11:48:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.779 11:48:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.779 --rc genhtml_branch_coverage=1 00:05:23.779 --rc genhtml_function_coverage=1 00:05:23.779 --rc genhtml_legend=1 00:05:23.779 --rc geninfo_all_blocks=1 00:05:23.779 --rc geninfo_unexecuted_blocks=1 00:05:23.779 00:05:23.779 ' 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.779 --rc genhtml_branch_coverage=1 00:05:23.779 --rc genhtml_function_coverage=1 00:05:23.779 --rc genhtml_legend=1 00:05:23.779 --rc geninfo_all_blocks=1 00:05:23.779 --rc geninfo_unexecuted_blocks=1 00:05:23.779 00:05:23.779 ' 00:05:23.779 11:48:21 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.779 --rc genhtml_branch_coverage=1 00:05:23.779 --rc genhtml_function_coverage=1 00:05:23.779 --rc genhtml_legend=1 00:05:23.779 --rc geninfo_all_blocks=1 00:05:23.779 --rc geninfo_unexecuted_blocks=1 00:05:23.779 00:05:23.779 ' 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.779 --rc genhtml_branch_coverage=1 00:05:23.779 --rc genhtml_function_coverage=1 00:05:23.779 --rc genhtml_legend=1 00:05:23.779 --rc geninfo_all_blocks=1 00:05:23.779 --rc geninfo_unexecuted_blocks=1 00:05:23.779 00:05:23.779 ' 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69614 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.779 11:48:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69614 00:05:23.779 11:48:21 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69614 ']' 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.779 11:48:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.779 [2024-12-06 11:48:21.392425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:23.779 [2024-12-06 11:48:21.392664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69614 ] 00:05:24.038 [2024-12-06 11:48:21.542363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.038 [2024-12-06 11:48:21.590838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.038 [2024-12-06 11:48:21.590939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.605 11:48:22 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.605 11:48:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:24.605 11:48:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:24.605 11:48:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69620 00:05:24.605 11:48:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.863 [ 00:05:24.863 "bdev_malloc_delete", 
00:05:24.864 "bdev_malloc_create", 00:05:24.864 "bdev_null_resize", 00:05:24.864 "bdev_null_delete", 00:05:24.864 "bdev_null_create", 00:05:24.864 "bdev_nvme_cuse_unregister", 00:05:24.864 "bdev_nvme_cuse_register", 00:05:24.864 "bdev_opal_new_user", 00:05:24.864 "bdev_opal_set_lock_state", 00:05:24.864 "bdev_opal_delete", 00:05:24.864 "bdev_opal_get_info", 00:05:24.864 "bdev_opal_create", 00:05:24.864 "bdev_nvme_opal_revert", 00:05:24.864 "bdev_nvme_opal_init", 00:05:24.864 "bdev_nvme_send_cmd", 00:05:24.864 "bdev_nvme_set_keys", 00:05:24.864 "bdev_nvme_get_path_iostat", 00:05:24.864 "bdev_nvme_get_mdns_discovery_info", 00:05:24.864 "bdev_nvme_stop_mdns_discovery", 00:05:24.864 "bdev_nvme_start_mdns_discovery", 00:05:24.864 "bdev_nvme_set_multipath_policy", 00:05:24.864 "bdev_nvme_set_preferred_path", 00:05:24.864 "bdev_nvme_get_io_paths", 00:05:24.864 "bdev_nvme_remove_error_injection", 00:05:24.864 "bdev_nvme_add_error_injection", 00:05:24.864 "bdev_nvme_get_discovery_info", 00:05:24.864 "bdev_nvme_stop_discovery", 00:05:24.864 "bdev_nvme_start_discovery", 00:05:24.864 "bdev_nvme_get_controller_health_info", 00:05:24.864 "bdev_nvme_disable_controller", 00:05:24.864 "bdev_nvme_enable_controller", 00:05:24.864 "bdev_nvme_reset_controller", 00:05:24.864 "bdev_nvme_get_transport_statistics", 00:05:24.864 "bdev_nvme_apply_firmware", 00:05:24.864 "bdev_nvme_detach_controller", 00:05:24.864 "bdev_nvme_get_controllers", 00:05:24.864 "bdev_nvme_attach_controller", 00:05:24.864 "bdev_nvme_set_hotplug", 00:05:24.864 "bdev_nvme_set_options", 00:05:24.864 "bdev_passthru_delete", 00:05:24.864 "bdev_passthru_create", 00:05:24.864 "bdev_lvol_set_parent_bdev", 00:05:24.864 "bdev_lvol_set_parent", 00:05:24.864 "bdev_lvol_check_shallow_copy", 00:05:24.864 "bdev_lvol_start_shallow_copy", 00:05:24.864 "bdev_lvol_grow_lvstore", 00:05:24.864 "bdev_lvol_get_lvols", 00:05:24.864 "bdev_lvol_get_lvstores", 00:05:24.864 "bdev_lvol_delete", 00:05:24.864 "bdev_lvol_set_read_only", 
00:05:24.864 "bdev_lvol_resize", 00:05:24.864 "bdev_lvol_decouple_parent", 00:05:24.864 "bdev_lvol_inflate", 00:05:24.864 "bdev_lvol_rename", 00:05:24.864 "bdev_lvol_clone_bdev", 00:05:24.864 "bdev_lvol_clone", 00:05:24.864 "bdev_lvol_snapshot", 00:05:24.864 "bdev_lvol_create", 00:05:24.864 "bdev_lvol_delete_lvstore", 00:05:24.864 "bdev_lvol_rename_lvstore", 00:05:24.864 "bdev_lvol_create_lvstore", 00:05:24.864 "bdev_raid_set_options", 00:05:24.864 "bdev_raid_remove_base_bdev", 00:05:24.864 "bdev_raid_add_base_bdev", 00:05:24.864 "bdev_raid_delete", 00:05:24.864 "bdev_raid_create", 00:05:24.864 "bdev_raid_get_bdevs", 00:05:24.864 "bdev_error_inject_error", 00:05:24.864 "bdev_error_delete", 00:05:24.864 "bdev_error_create", 00:05:24.864 "bdev_split_delete", 00:05:24.864 "bdev_split_create", 00:05:24.864 "bdev_delay_delete", 00:05:24.864 "bdev_delay_create", 00:05:24.864 "bdev_delay_update_latency", 00:05:24.864 "bdev_zone_block_delete", 00:05:24.864 "bdev_zone_block_create", 00:05:24.864 "blobfs_create", 00:05:24.864 "blobfs_detect", 00:05:24.864 "blobfs_set_cache_size", 00:05:24.864 "bdev_aio_delete", 00:05:24.864 "bdev_aio_rescan", 00:05:24.864 "bdev_aio_create", 00:05:24.864 "bdev_ftl_set_property", 00:05:24.864 "bdev_ftl_get_properties", 00:05:24.864 "bdev_ftl_get_stats", 00:05:24.864 "bdev_ftl_unmap", 00:05:24.864 "bdev_ftl_unload", 00:05:24.864 "bdev_ftl_delete", 00:05:24.864 "bdev_ftl_load", 00:05:24.864 "bdev_ftl_create", 00:05:24.864 "bdev_virtio_attach_controller", 00:05:24.864 "bdev_virtio_scsi_get_devices", 00:05:24.864 "bdev_virtio_detach_controller", 00:05:24.864 "bdev_virtio_blk_set_hotplug", 00:05:24.864 "bdev_iscsi_delete", 00:05:24.864 "bdev_iscsi_create", 00:05:24.864 "bdev_iscsi_set_options", 00:05:24.864 "accel_error_inject_error", 00:05:24.864 "ioat_scan_accel_module", 00:05:24.864 "dsa_scan_accel_module", 00:05:24.864 "iaa_scan_accel_module", 00:05:24.864 "keyring_file_remove_key", 00:05:24.864 "keyring_file_add_key", 00:05:24.864 
"keyring_linux_set_options", 00:05:24.864 "fsdev_aio_delete", 00:05:24.864 "fsdev_aio_create", 00:05:24.864 "iscsi_get_histogram", 00:05:24.864 "iscsi_enable_histogram", 00:05:24.864 "iscsi_set_options", 00:05:24.864 "iscsi_get_auth_groups", 00:05:24.864 "iscsi_auth_group_remove_secret", 00:05:24.864 "iscsi_auth_group_add_secret", 00:05:24.864 "iscsi_delete_auth_group", 00:05:24.864 "iscsi_create_auth_group", 00:05:24.864 "iscsi_set_discovery_auth", 00:05:24.864 "iscsi_get_options", 00:05:24.864 "iscsi_target_node_request_logout", 00:05:24.864 "iscsi_target_node_set_redirect", 00:05:24.864 "iscsi_target_node_set_auth", 00:05:24.864 "iscsi_target_node_add_lun", 00:05:24.864 "iscsi_get_stats", 00:05:24.864 "iscsi_get_connections", 00:05:24.864 "iscsi_portal_group_set_auth", 00:05:24.864 "iscsi_start_portal_group", 00:05:24.864 "iscsi_delete_portal_group", 00:05:24.864 "iscsi_create_portal_group", 00:05:24.864 "iscsi_get_portal_groups", 00:05:24.864 "iscsi_delete_target_node", 00:05:24.864 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.864 "iscsi_target_node_add_pg_ig_maps", 00:05:24.864 "iscsi_create_target_node", 00:05:24.864 "iscsi_get_target_nodes", 00:05:24.864 "iscsi_delete_initiator_group", 00:05:24.864 "iscsi_initiator_group_remove_initiators", 00:05:24.864 "iscsi_initiator_group_add_initiators", 00:05:24.864 "iscsi_create_initiator_group", 00:05:24.864 "iscsi_get_initiator_groups", 00:05:24.864 "nvmf_set_crdt", 00:05:24.864 "nvmf_set_config", 00:05:24.864 "nvmf_set_max_subsystems", 00:05:24.864 "nvmf_stop_mdns_prr", 00:05:24.864 "nvmf_publish_mdns_prr", 00:05:24.864 "nvmf_subsystem_get_listeners", 00:05:24.864 "nvmf_subsystem_get_qpairs", 00:05:24.864 "nvmf_subsystem_get_controllers", 00:05:24.864 "nvmf_get_stats", 00:05:24.864 "nvmf_get_transports", 00:05:24.864 "nvmf_create_transport", 00:05:24.864 "nvmf_get_targets", 00:05:24.864 "nvmf_delete_target", 00:05:24.864 "nvmf_create_target", 00:05:24.864 "nvmf_subsystem_allow_any_host", 00:05:24.864 
"nvmf_subsystem_set_keys", 00:05:24.864 "nvmf_subsystem_remove_host", 00:05:24.864 "nvmf_subsystem_add_host", 00:05:24.864 "nvmf_ns_remove_host", 00:05:24.864 "nvmf_ns_add_host", 00:05:24.864 "nvmf_subsystem_remove_ns", 00:05:24.864 "nvmf_subsystem_set_ns_ana_group", 00:05:24.864 "nvmf_subsystem_add_ns", 00:05:24.864 "nvmf_subsystem_listener_set_ana_state", 00:05:24.864 "nvmf_discovery_get_referrals", 00:05:24.864 "nvmf_discovery_remove_referral", 00:05:24.864 "nvmf_discovery_add_referral", 00:05:24.864 "nvmf_subsystem_remove_listener", 00:05:24.864 "nvmf_subsystem_add_listener", 00:05:24.864 "nvmf_delete_subsystem", 00:05:24.864 "nvmf_create_subsystem", 00:05:24.864 "nvmf_get_subsystems", 00:05:24.864 "env_dpdk_get_mem_stats", 00:05:24.864 "nbd_get_disks", 00:05:24.864 "nbd_stop_disk", 00:05:24.864 "nbd_start_disk", 00:05:24.864 "ublk_recover_disk", 00:05:24.864 "ublk_get_disks", 00:05:24.864 "ublk_stop_disk", 00:05:24.864 "ublk_start_disk", 00:05:24.864 "ublk_destroy_target", 00:05:24.864 "ublk_create_target", 00:05:24.864 "virtio_blk_create_transport", 00:05:24.864 "virtio_blk_get_transports", 00:05:24.864 "vhost_controller_set_coalescing", 00:05:24.864 "vhost_get_controllers", 00:05:24.864 "vhost_delete_controller", 00:05:24.864 "vhost_create_blk_controller", 00:05:24.864 "vhost_scsi_controller_remove_target", 00:05:24.864 "vhost_scsi_controller_add_target", 00:05:24.864 "vhost_start_scsi_controller", 00:05:24.864 "vhost_create_scsi_controller", 00:05:24.864 "thread_set_cpumask", 00:05:24.864 "scheduler_set_options", 00:05:24.864 "framework_get_governor", 00:05:24.864 "framework_get_scheduler", 00:05:24.865 "framework_set_scheduler", 00:05:24.865 "framework_get_reactors", 00:05:24.865 "thread_get_io_channels", 00:05:24.865 "thread_get_pollers", 00:05:24.865 "thread_get_stats", 00:05:24.865 "framework_monitor_context_switch", 00:05:24.865 "spdk_kill_instance", 00:05:24.865 "log_enable_timestamps", 00:05:24.865 "log_get_flags", 00:05:24.865 "log_clear_flag", 
00:05:24.865 "log_set_flag", 00:05:24.865 "log_get_level", 00:05:24.865 "log_set_level", 00:05:24.865 "log_get_print_level", 00:05:24.865 "log_set_print_level", 00:05:24.865 "framework_enable_cpumask_locks", 00:05:24.865 "framework_disable_cpumask_locks", 00:05:24.865 "framework_wait_init", 00:05:24.865 "framework_start_init", 00:05:24.865 "scsi_get_devices", 00:05:24.865 "bdev_get_histogram", 00:05:24.865 "bdev_enable_histogram", 00:05:24.865 "bdev_set_qos_limit", 00:05:24.865 "bdev_set_qd_sampling_period", 00:05:24.865 "bdev_get_bdevs", 00:05:24.865 "bdev_reset_iostat", 00:05:24.865 "bdev_get_iostat", 00:05:24.865 "bdev_examine", 00:05:24.865 "bdev_wait_for_examine", 00:05:24.865 "bdev_set_options", 00:05:24.865 "accel_get_stats", 00:05:24.865 "accel_set_options", 00:05:24.865 "accel_set_driver", 00:05:24.865 "accel_crypto_key_destroy", 00:05:24.865 "accel_crypto_keys_get", 00:05:24.865 "accel_crypto_key_create", 00:05:24.865 "accel_assign_opc", 00:05:24.865 "accel_get_module_info", 00:05:24.865 "accel_get_opc_assignments", 00:05:24.865 "vmd_rescan", 00:05:24.865 "vmd_remove_device", 00:05:24.865 "vmd_enable", 00:05:24.865 "sock_get_default_impl", 00:05:24.865 "sock_set_default_impl", 00:05:24.865 "sock_impl_set_options", 00:05:24.865 "sock_impl_get_options", 00:05:24.865 "iobuf_get_stats", 00:05:24.865 "iobuf_set_options", 00:05:24.865 "keyring_get_keys", 00:05:24.865 "framework_get_pci_devices", 00:05:24.865 "framework_get_config", 00:05:24.865 "framework_get_subsystems", 00:05:24.865 "fsdev_set_opts", 00:05:24.865 "fsdev_get_opts", 00:05:24.865 "trace_get_info", 00:05:24.865 "trace_get_tpoint_group_mask", 00:05:24.865 "trace_disable_tpoint_group", 00:05:24.865 "trace_enable_tpoint_group", 00:05:24.865 "trace_clear_tpoint_mask", 00:05:24.865 "trace_set_tpoint_mask", 00:05:24.865 "notify_get_notifications", 00:05:24.865 "notify_get_types", 00:05:24.865 "spdk_get_version", 00:05:24.865 "rpc_get_methods" 00:05:24.865 ] 00:05:24.865 11:48:22 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.865 11:48:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.865 11:48:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69614 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69614 ']' 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69614 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69614 00:05:24.865 killing process with pid 69614 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69614' 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69614 00:05:24.865 11:48:22 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69614 00:05:25.123 ************************************ 00:05:25.123 END TEST spdkcli_tcp 00:05:25.123 ************************************ 00:05:25.123 00:05:25.123 real 0m1.817s 00:05:25.123 user 0m2.917s 00:05:25.123 sys 0m0.602s 00:05:25.123 11:48:22 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.123 11:48:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.383 11:48:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.383 11:48:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.383 11:48:22 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.383 11:48:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.383 ************************************ 00:05:25.383 START TEST dpdk_mem_utility 00:05:25.383 ************************************ 00:05:25.383 11:48:22 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.383 * Looking for test storage... 00:05:25.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:25.383 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:25.383 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:25.383 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:25.383 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:25.383 
11:48:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.383 11:48:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.641 11:48:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.642 11:48:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:25.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.642 --rc genhtml_branch_coverage=1 00:05:25.642 --rc genhtml_function_coverage=1 00:05:25.642 --rc genhtml_legend=1 00:05:25.642 --rc geninfo_all_blocks=1 00:05:25.642 --rc geninfo_unexecuted_blocks=1 00:05:25.642 00:05:25.642 ' 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:25.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.642 --rc 
genhtml_branch_coverage=1 00:05:25.642 --rc genhtml_function_coverage=1 00:05:25.642 --rc genhtml_legend=1 00:05:25.642 --rc geninfo_all_blocks=1 00:05:25.642 --rc geninfo_unexecuted_blocks=1 00:05:25.642 00:05:25.642 ' 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:25.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.642 --rc genhtml_branch_coverage=1 00:05:25.642 --rc genhtml_function_coverage=1 00:05:25.642 --rc genhtml_legend=1 00:05:25.642 --rc geninfo_all_blocks=1 00:05:25.642 --rc geninfo_unexecuted_blocks=1 00:05:25.642 00:05:25.642 ' 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:25.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.642 --rc genhtml_branch_coverage=1 00:05:25.642 --rc genhtml_function_coverage=1 00:05:25.642 --rc genhtml_legend=1 00:05:25.642 --rc geninfo_all_blocks=1 00:05:25.642 --rc geninfo_unexecuted_blocks=1 00:05:25.642 00:05:25.642 ' 00:05:25.642 11:48:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.642 11:48:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69703 00:05:25.642 11:48:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.642 11:48:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69703 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69703 ']' 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:25.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.642 11:48:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.642 [2024-12-06 11:48:23.254828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:25.642 [2024-12-06 11:48:23.254954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69703 ] 00:05:25.900 [2024-12-06 11:48:23.402169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.900 [2024-12-06 11:48:23.446507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.468 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.468 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:26.468 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:26.468 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:26.468 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.468 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.468 { 00:05:26.468 "filename": "/tmp/spdk_mem_dump.txt" 00:05:26.468 } 00:05:26.468 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.468 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.468 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:26.468 1 heaps totaling size 860.000000 MiB 00:05:26.468 size: 
860.000000 MiB heap id: 0 00:05:26.468 end heaps---------- 00:05:26.468 9 mempools totaling size 642.649841 MiB 00:05:26.468 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.468 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.468 size: 92.545471 MiB name: bdev_io_69703 00:05:26.468 size: 51.011292 MiB name: evtpool_69703 00:05:26.468 size: 50.003479 MiB name: msgpool_69703 00:05:26.468 size: 36.509338 MiB name: fsdev_io_69703 00:05:26.468 size: 21.763794 MiB name: PDU_Pool 00:05:26.468 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.468 size: 0.026123 MiB name: Session_Pool 00:05:26.468 end mempools------- 00:05:26.468 6 memzones totaling size 4.142822 MiB 00:05:26.468 size: 1.000366 MiB name: RG_ring_0_69703 00:05:26.468 size: 1.000366 MiB name: RG_ring_1_69703 00:05:26.468 size: 1.000366 MiB name: RG_ring_4_69703 00:05:26.468 size: 1.000366 MiB name: RG_ring_5_69703 00:05:26.468 size: 0.125366 MiB name: RG_ring_2_69703 00:05:26.468 size: 0.015991 MiB name: RG_ring_3_69703 00:05:26.468 end memzones------- 00:05:26.468 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.468 heap id: 0 total size: 860.000000 MiB number of busy elements: 307 number of free elements: 16 00:05:26.468 list of free elements. 
size: 13.936523 MiB 00:05:26.468 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:26.468 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:26.468 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:26.468 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:26.468 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:26.468 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:26.468 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:26.468 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:26.468 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:26.468 element at address: 0x20001d800000 with size: 0.566956 MiB 00:05:26.468 element at address: 0x200003e00000 with size: 0.489563 MiB 00:05:26.468 element at address: 0x20000d800000 with size: 0.489441 MiB 00:05:26.468 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:26.468 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:26.468 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:26.468 element at address: 0x200003a00000 with size: 0.352478 MiB 00:05:26.468 list of standard malloc elements. 
size: 199.266785 MiB 00:05:26.468 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:26.468 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:26.468 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:26.468 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:26.468 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:26.468 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:26.468 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:26.468 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:26.468 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:26.468 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:26.468 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:26.469 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a5a3c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a5e880 with size: 0.000183 MiB 
00:05:26.469 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7dc00 with 
size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:26.469 element at address: 
0x20000707b0c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:26.469 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891240 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891300 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:05:26.469 
element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8928c0 with size: 0.000183 
MiB 00:05:26.469 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:26.469 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893dc0 
with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:26.470 element at 
address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d380 with size: 0.000183 MiB 
00:05:26.470 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e880 with 
size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:26.470 element at address: 
0x20002ac6fd80 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:26.470 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:26.470 list of memzone associated elements. size: 646.796692 MiB 00:05:26.470 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:26.470 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.470 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:26.470 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.470 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:26.470 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69703_0 00:05:26.470 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:26.470 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69703_0 00:05:26.470 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:26.470 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69703_0 00:05:26.470 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:26.470 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69703_0 00:05:26.470 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:26.470 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.470 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:26.470 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.470 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:26.470 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69703 00:05:26.470 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:26.470 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69703 00:05:26.470 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:26.470 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69703 00:05:26.470 element 
at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:26.470 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.470 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:26.470 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.471 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:26.471 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.471 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:26.471 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.471 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:26.471 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69703 00:05:26.471 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:26.471 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69703 00:05:26.471 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:26.471 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69703 00:05:26.471 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:26.471 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69703 00:05:26.471 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:26.471 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69703 00:05:26.471 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:26.471 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69703 00:05:26.471 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:26.471 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.471 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:26.471 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.471 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:26.471 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.471 
element at address: 0x200003a5e940 with size: 0.125488 MiB 00:05:26.471 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69703 00:05:26.471 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:26.471 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.471 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:26.471 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.471 element at address: 0x200003a5a680 with size: 0.016113 MiB 00:05:26.471 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69703 00:05:26.471 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:26.471 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.471 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:26.471 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69703 00:05:26.471 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:26.471 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69703 00:05:26.471 element at address: 0x200003a5a480 with size: 0.000305 MiB 00:05:26.471 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69703 00:05:26.471 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:26.471 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.471 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.471 11:48:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69703 00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69703 ']' 00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69703 00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.471 11:48:24 dpdk_mem_utility 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69703
00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:26.471 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69703'
00:05:26.471 killing process with pid 69703 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69703
00:05:26.729 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69703
00:05:26.987
00:05:26.987 real 0m1.667s
00:05:26.987 user 0m1.632s
00:05:26.987 sys 0m0.472s
00:05:26.987 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:26.987 11:48:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:26.987 ************************************
00:05:26.987 END TEST dpdk_mem_utility
00:05:26.987 ************************************
00:05:26.987 11:48:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:26.987 11:48:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:26.987 11:48:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:26.987 11:48:24 -- common/autotest_common.sh@10 -- # set +x
00:05:26.987 ************************************
00:05:26.987 START TEST event
00:05:26.987 ************************************
00:05:26.987 11:48:24 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:27.244 * Looking for test storage...
00:05:27.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.244 11:48:24 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.244 11:48:24 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.244 11:48:24 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.244 11:48:24 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.244 11:48:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.245 11:48:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.245 11:48:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.245 11:48:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.245 11:48:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.245 11:48:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.245 11:48:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.245 11:48:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.245 11:48:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.245 11:48:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.245 11:48:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.245 11:48:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:27.245 11:48:24 event -- scripts/common.sh@345 -- # : 1 00:05:27.245 11:48:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.245 11:48:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.245 11:48:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:27.245 11:48:24 event -- scripts/common.sh@353 -- # local d=1 00:05:27.245 11:48:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.245 11:48:24 event -- scripts/common.sh@355 -- # echo 1 00:05:27.245 11:48:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.245 11:48:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:27.245 11:48:24 event -- scripts/common.sh@353 -- # local d=2 00:05:27.245 11:48:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.245 11:48:24 event -- scripts/common.sh@355 -- # echo 2 00:05:27.245 11:48:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.245 11:48:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.245 11:48:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.245 11:48:24 event -- scripts/common.sh@368 -- # return 0 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.245 --rc genhtml_branch_coverage=1 00:05:27.245 --rc genhtml_function_coverage=1 00:05:27.245 --rc genhtml_legend=1 00:05:27.245 --rc geninfo_all_blocks=1 00:05:27.245 --rc geninfo_unexecuted_blocks=1 00:05:27.245 00:05:27.245 ' 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.245 --rc genhtml_branch_coverage=1 00:05:27.245 --rc genhtml_function_coverage=1 00:05:27.245 --rc genhtml_legend=1 00:05:27.245 --rc geninfo_all_blocks=1 00:05:27.245 --rc geninfo_unexecuted_blocks=1 00:05:27.245 00:05:27.245 ' 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.245 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:27.245 --rc genhtml_branch_coverage=1 00:05:27.245 --rc genhtml_function_coverage=1 00:05:27.245 --rc genhtml_legend=1 00:05:27.245 --rc geninfo_all_blocks=1 00:05:27.245 --rc geninfo_unexecuted_blocks=1 00:05:27.245 00:05:27.245 ' 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.245 --rc genhtml_branch_coverage=1 00:05:27.245 --rc genhtml_function_coverage=1 00:05:27.245 --rc genhtml_legend=1 00:05:27.245 --rc geninfo_all_blocks=1 00:05:27.245 --rc geninfo_unexecuted_blocks=1 00:05:27.245 00:05:27.245 ' 00:05:27.245 11:48:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:27.245 11:48:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:27.245 11:48:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:27.245 11:48:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.245 11:48:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.245 ************************************ 00:05:27.245 START TEST event_perf 00:05:27.245 ************************************ 00:05:27.245 11:48:24 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.245 Running I/O for 1 seconds...[2024-12-06 11:48:24.953583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:27.245 [2024-12-06 11:48:24.953723] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69789 ]
00:05:27.514 [2024-12-06 11:48:25.100327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:27.514 [2024-12-06 11:48:25.147733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.514 Running I/O for 1 seconds...[2024-12-06 11:48:25.148007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.514 [2024-12-06 11:48:25.148006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:27.514 [2024-12-06 11:48:25.148161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:28.891
00:05:28.892 lcore 0: 102497
00:05:28.892 lcore 1: 102494
00:05:28.892 lcore 2: 102496
00:05:28.892 lcore 3: 102495
00:05:28.892 done.
00:05:28.892
00:05:28.892 real 0m1.333s
00:05:28.892 user 0m4.099s
00:05:28.892 sys 0m0.112s
00:05:28.892 ************************************
00:05:28.892 END TEST event_perf
00:05:28.892 ************************************
00:05:28.892 11:48:26 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:28.892 11:48:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:28.892 11:48:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:28.892 11:48:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:28.892 11:48:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:28.892 11:48:26 event -- common/autotest_common.sh@10 -- # set +x
00:05:28.892 ************************************
00:05:28.892 START TEST event_reactor
00:05:28.892 ************************************
00:05:28.892 11:48:26 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:28.892 [2024-12-06 11:48:26.356931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:05:28.892 [2024-12-06 11:48:26.357056] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69829 ]
00:05:28.892 [2024-12-06 11:48:26.499340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.892 [2024-12-06 11:48:26.542803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.269 test_start
00:05:30.269 oneshot
00:05:30.269 tick 100
00:05:30.269 tick 100
00:05:30.269 tick 250
00:05:30.269 tick 100
00:05:30.269 tick 100
00:05:30.269 tick 100
00:05:30.269 tick 250
00:05:30.269 tick 500
00:05:30.269 tick 100
00:05:30.269 tick 100
00:05:30.269 tick 250
00:05:30.269 tick 100
00:05:30.269 tick 100
00:05:30.269 test_end
00:05:30.269
00:05:30.269 real 0m1.317s
00:05:30.269 user 0m1.126s
00:05:30.269 sys 0m0.084s
00:05:30.269 11:48:27 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.269 11:48:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:30.269 ************************************
00:05:30.269 END TEST event_reactor
00:05:30.269 ************************************
00:05:30.269 11:48:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:30.269 11:48:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:30.269 11:48:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.269 11:48:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.269 ************************************
00:05:30.269 START TEST event_reactor_perf
00:05:30.269 ************************************
00:05:30.269 11:48:27 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:30.269 [2024-12-06 11:48:27.741766] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:05:30.269 [2024-12-06 11:48:27.741954] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ]
00:05:30.269 [2024-12-06 11:48:27.884774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.269 [2024-12-06 11:48:27.928765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.650 test_start
00:05:31.650 test_end
00:05:31.650 Performance: 408810 events per second
00:05:31.650
00:05:31.650 real 0m1.317s
00:05:31.650 user 0m1.129s
00:05:31.650 sys 0m0.081s
00:05:31.650 11:48:29 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.650 ************************************
00:05:31.650 END TEST event_reactor_perf
00:05:31.650 ************************************
00:05:31.650 11:48:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:31.650 11:48:29 event -- event/event.sh@49 -- # uname -s
00:05:31.650 11:48:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:31.650 11:48:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:31.650 11:48:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.650 11:48:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.650 11:48:29 event -- common/autotest_common.sh@10 -- # set +x
00:05:31.650 ************************************
00:05:31.650 START TEST event_scheduler
00:05:31.650 ************************************
00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:31.650 * Looking for test storage...
00:05:31.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.650 11:48:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.650 --rc genhtml_branch_coverage=1 00:05:31.650 --rc genhtml_function_coverage=1 00:05:31.650 --rc genhtml_legend=1 00:05:31.650 --rc geninfo_all_blocks=1 00:05:31.650 --rc geninfo_unexecuted_blocks=1 00:05:31.650 00:05:31.650 ' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.650 --rc genhtml_branch_coverage=1 00:05:31.650 --rc genhtml_function_coverage=1 00:05:31.650 --rc 
genhtml_legend=1 00:05:31.650 --rc geninfo_all_blocks=1 00:05:31.650 --rc geninfo_unexecuted_blocks=1 00:05:31.650 00:05:31.650 ' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.650 --rc genhtml_branch_coverage=1 00:05:31.650 --rc genhtml_function_coverage=1 00:05:31.650 --rc genhtml_legend=1 00:05:31.650 --rc geninfo_all_blocks=1 00:05:31.650 --rc geninfo_unexecuted_blocks=1 00:05:31.650 00:05:31.650 ' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.650 --rc genhtml_branch_coverage=1 00:05:31.650 --rc genhtml_function_coverage=1 00:05:31.650 --rc genhtml_legend=1 00:05:31.650 --rc geninfo_all_blocks=1 00:05:31.650 --rc geninfo_unexecuted_blocks=1 00:05:31.650 00:05:31.650 ' 00:05:31.650 11:48:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.650 11:48:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.650 11:48:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69930 00:05:31.650 11:48:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.650 11:48:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69930 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 69930 ']' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.650 11:48:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.650 [2024-12-06 11:48:29.386976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:31.650 [2024-12-06 11:48:29.387187] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69930 ] 00:05:31.909 [2024-12-06 11:48:29.532060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.909 [2024-12-06 11:48:29.585480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.909 [2024-12-06 11:48:29.585679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.909 [2024-12-06 11:48:29.585800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.909 [2024-12-06 11:48:29.586029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:32.475 11:48:30 event.event_scheduler -- 
scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.475 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.475 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.475 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.475 POWER: Cannot set governor of lcore 0 to performance 00:05:32.475 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.475 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.475 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:32.475 POWER: Unable to set Power Management Environment for lcore 0 00:05:32.475 [2024-12-06 11:48:30.218704] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:32.475 [2024-12-06 11:48:30.218732] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:32.475 [2024-12-06 11:48:30.218779] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.475 [2024-12-06 11:48:30.218833] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.475 [2024-12-06 11:48:30.218846] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.475 [2024-12-06 11:48:30.218878] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.475 11:48:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.475 11:48:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.475 11:48:30 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.735 [2024-12-06 11:48:30.288831] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:32.735 11:48:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.735 11:48:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.735 11:48:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 ************************************ 00:05:32.735 START TEST scheduler_create_thread 00:05:32.735 ************************************ 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 2 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 3 00:05:32.735 11:48:30 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 4 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 5 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 6 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.735 11:48:30 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 7 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 8 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 9 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 10 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.735 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.303 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.303 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.303 11:48:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.303 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.303 11:48:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 11:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.241 11:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.241 11:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.241 11:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 11:48:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.177 11:48:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.177 11:48:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.177 
11:48:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.177 11:48:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.115 11:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.115 00:05:36.115 real 0m3.214s 00:05:36.115 user 0m0.022s 00:05:36.115 sys 0m0.012s 00:05:36.115 11:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.115 11:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.115 ************************************ 00:05:36.115 END TEST scheduler_create_thread 00:05:36.115 ************************************ 00:05:36.115 11:48:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.115 11:48:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69930 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 69930 ']' 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 69930 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69930 00:05:36.115 killing process with pid 69930 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69930' 00:05:36.115 11:48:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 69930 00:05:36.115 
11:48:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 69930 00:05:36.374 [2024-12-06 11:48:33.895334] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:36.632 00:05:36.632 real 0m5.100s 00:05:36.632 user 0m9.898s 00:05:36.632 sys 0m0.473s 00:05:36.632 11:48:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.632 ************************************ 00:05:36.632 END TEST event_scheduler 00:05:36.632 ************************************ 00:05:36.632 11:48:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 11:48:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.632 11:48:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.632 11:48:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.632 11:48:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.632 11:48:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 ************************************ 00:05:36.632 START TEST app_repeat 00:05:36.632 ************************************ 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70036 00:05:36.632 11:48:34 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.632 Process app_repeat pid: 70036 00:05:36.632 spdk_app_start Round 0 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70036' 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.632 11:48:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70036 /var/tmp/spdk-nbd.sock 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.632 11:48:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 [2024-12-06 11:48:34.320761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:36.632 [2024-12-06 11:48:34.320891] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ] 00:05:36.890 [2024-12-06 11:48:34.447972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.890 [2024-12-06 11:48:34.493925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.890 [2024-12-06 11:48:34.494014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.514 11:48:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.514 11:48:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.514 11:48:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.772 Malloc0 00:05:37.772 11:48:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.030 Malloc1 00:05:38.030 11:48:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.030 11:48:35 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.030 11:48:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.288 /dev/nbd0 00:05:38.288 11:48:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.288 11:48:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.288 1+0 records in 00:05:38.288 1+0 
records out 00:05:38.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271117 s, 15.1 MB/s 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.288 11:48:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.288 11:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.288 11:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.288 11:48:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.545 /dev/nbd1 00:05:38.545 11:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.545 11:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.545 1+0 records in 00:05:38.545 1+0 records out 00:05:38.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381581 s, 10.7 MB/s 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.545 11:48:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.545 11:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.545 11:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.545 11:48:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.546 11:48:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.546 11:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.804 { 00:05:38.804 "nbd_device": "/dev/nbd0", 00:05:38.804 "bdev_name": "Malloc0" 00:05:38.804 }, 00:05:38.804 { 00:05:38.804 "nbd_device": "/dev/nbd1", 00:05:38.804 "bdev_name": "Malloc1" 00:05:38.804 } 00:05:38.804 ]' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.804 { 00:05:38.804 "nbd_device": "/dev/nbd0", 00:05:38.804 "bdev_name": "Malloc0" 00:05:38.804 }, 00:05:38.804 { 00:05:38.804 "nbd_device": "/dev/nbd1", 00:05:38.804 "bdev_name": "Malloc1" 00:05:38.804 } 00:05:38.804 ]' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.804 /dev/nbd1' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.804 /dev/nbd1' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.804 256+0 records in 00:05:38.804 256+0 records out 00:05:38.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435524 s, 241 MB/s 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.804 256+0 records in 00:05:38.804 256+0 records out 00:05:38.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03007 s, 34.9 MB/s 00:05:38.804 11:48:36 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.804 256+0 records in 00:05:38.804 256+0 records out 00:05:38.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255341 s, 41.1 MB/s 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.804 11:48:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.062 11:48:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.320 11:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.578 11:48:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.578 11:48:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.836 11:48:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.836 [2024-12-06 11:48:37.517696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.836 [2024-12-06 11:48:37.558568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.836 [2024-12-06 11:48:37.558608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.094 
[2024-12-06 11:48:37.600939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.094 [2024-12-06 11:48:37.601004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.627 spdk_app_start Round 1 00:05:42.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.627 11:48:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.627 11:48:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.627 11:48:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70036 /var/tmp/spdk-nbd.sock 00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.627 11:48:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.886 11:48:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.886 11:48:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.886 11:48:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.144 Malloc0 00:05:43.144 11:48:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.403 Malloc1 00:05:43.403 11:48:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.403 11:48:40 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.403 11:48:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.663 /dev/nbd0 00:05:43.663 11:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.663 11:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.663 1+0 records in 00:05:43.663 1+0 records out 00:05:43.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350551 s, 11.7 MB/s 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.663 
11:48:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.663 11:48:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.663 11:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.663 11:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.663 11:48:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.923 /dev/nbd1 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.923 1+0 records in 00:05:43.923 1+0 records out 00:05:43.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392869 s, 10.4 MB/s 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.923 11:48:41 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.923 11:48:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.923 11:48:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.923 { 00:05:43.923 "nbd_device": "/dev/nbd0", 00:05:43.923 "bdev_name": "Malloc0" 00:05:43.923 }, 00:05:43.923 { 00:05:43.923 "nbd_device": "/dev/nbd1", 00:05:43.923 "bdev_name": "Malloc1" 00:05:43.923 } 00:05:43.923 ]' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.182 { 00:05:44.182 "nbd_device": "/dev/nbd0", 00:05:44.182 "bdev_name": "Malloc0" 00:05:44.182 }, 00:05:44.182 { 00:05:44.182 "nbd_device": "/dev/nbd1", 00:05:44.182 "bdev_name": "Malloc1" 00:05:44.182 } 00:05:44.182 ]' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.182 /dev/nbd1' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.182 /dev/nbd1' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.182 
11:48:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.182 256+0 records in 00:05:44.182 256+0 records out 00:05:44.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150781 s, 69.5 MB/s 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.182 256+0 records in 00:05:44.182 256+0 records out 00:05:44.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236968 s, 44.2 MB/s 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.182 256+0 records in 00:05:44.182 256+0 records out 00:05:44.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242766 s, 43.2 MB/s 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.182 11:48:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.183 11:48:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.441 11:48:42 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.442 11:48:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.700 11:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.982 11:48:42 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.982 11:48:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.982 11:48:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.982 11:48:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.241 [2024-12-06 11:48:42.895071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.241 [2024-12-06 11:48:42.935787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.241 [2024-12-06 11:48:42.935817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.241 [2024-12-06 11:48:42.977117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.241 [2024-12-06 11:48:42.977173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.530 spdk_app_start Round 2 00:05:48.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
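The `waitfornbd` polling loop traced repeatedly above (autotest_common.sh@868-889) boils down to: grep /proc/partitions until the nbd device appears, giving up after 20 tries. A minimal sketch of that pattern follows; the `partitions_file` parameter is an addition here so the sketch can be exercised against a plain file instead of a live `/proc`:

```shell
# Sketch of the waitfornbd polling pattern seen in the trace above:
# poll the partitions table until the named nbd device is registered,
# giving up after 20 attempts. Real autotest code polls /proc/partitions;
# the second parameter is a stand-in added for testability.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions_file=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions_file"; then
            return 0   # device is registered with the kernel
        fi
        sleep 0.1
    done
    return 1           # timed out waiting for the device
}
```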
00:05:48.530 11:48:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.530 11:48:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.530 11:48:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70036 /var/tmp/spdk-nbd.sock 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.531 11:48:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.531 11:48:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.531 Malloc0 00:05:48.531 11:48:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.791 Malloc1 00:05:48.791 11:48:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.791 11:48:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.052 /dev/nbd0 00:05:49.052 11:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.052 11:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.052 1+0 records in 00:05:49.052 1+0 records out 00:05:49.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364061 s, 11.3 MB/s 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.052 11:48:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.052 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.052 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.052 11:48:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.313 /dev/nbd1 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:49.313 11:48:46 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.313 1+0 records in 00:05:49.313 1+0 records out 00:05:49.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408889 s, 10.0 MB/s 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.313 11:48:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.313 11:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.313 11:48:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.313 { 00:05:49.313 "nbd_device": "/dev/nbd0", 00:05:49.313 "bdev_name": "Malloc0" 00:05:49.313 }, 00:05:49.313 { 00:05:49.313 "nbd_device": "/dev/nbd1", 00:05:49.313 "bdev_name": "Malloc1" 00:05:49.313 } 00:05:49.313 ]' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.573 { 
00:05:49.573 "nbd_device": "/dev/nbd0", 00:05:49.573 "bdev_name": "Malloc0" 00:05:49.573 }, 00:05:49.573 { 00:05:49.573 "nbd_device": "/dev/nbd1", 00:05:49.573 "bdev_name": "Malloc1" 00:05:49.573 } 00:05:49.573 ]' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.573 /dev/nbd1' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.573 /dev/nbd1' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.573 11:48:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.573 256+0 records in 00:05:49.573 256+0 records out 00:05:49.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492787 s, 213 MB/s 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.574 11:48:47 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.574 256+0 records in 00:05:49.574 256+0 records out 00:05:49.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226046 s, 46.4 MB/s 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.574 256+0 records in 00:05:49.574 256+0 records out 00:05:49.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250041 s, 41.9 MB/s 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
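The `nbd_dd_data_verify` write/verify flow just traced (nbd_common.sh@70-85) can be sketched as: fill a temp file with random data, `dd` it onto each device, then `cmp` each device back against the temp file. Plain files stand in for `/dev/nbd*` below so the sketch runs without an NBD setup, and `oflag=direct` is dropped for the same reason:

```shell
# Sketch of the nbd_dd_data_verify flow: write 1 MiB of random data to each
# target, then verify every target byte-for-byte against the source file.
# The targets here are ordinary files, not real nbd devices.
nbd_dd_data_verify_sketch() {
    local tmp_file=$1; shift
    local nbd_list=("$@")
    local i
    # 256 x 4096-byte blocks = 1 MiB of random reference data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 status=none
    done
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i" || return 1
    done
    rm -f "$tmp_file"
}
```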
00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.574 11:48:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.832 11:48:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.091 11:48:47 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.091 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.350 11:48:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.351 11:48:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.351 11:48:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.610 
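The `nbd_get_count` check that just returned 0 above (nbd_common.sh@61-66) extracts the device paths from the `nbd_get_disks` JSON and counts them with `grep -c`. A sketch of the counting step, taking the already-extracted newline-separated name list as input:

```shell
# Sketch of the counting step in nbd_get_count: count lines matching
# /dev/nbd in a newline-separated device-name list. grep -c prints the
# count but exits 1 when it is zero, hence the `|| true` guard so an
# empty list (no disks attached) yields "0" without failing the caller.
count_nbd_devices() {
    local names=$1
    echo "$names" | grep -c /dev/nbd || true
}
```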
[2024-12-06 11:48:48.241090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.610 [2024-12-06 11:48:48.281422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.610 [2024-12-06 11:48:48.281426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.610 [2024-12-06 11:48:48.322912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.610 [2024-12-06 11:48:48.323077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.904 11:48:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70036 /var/tmp/spdk-nbd.sock 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.904 11:48:51 event.app_repeat -- event/event.sh@39 -- # killprocess 70036 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70036 ']' 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70036 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70036 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70036' 00:05:53.904 killing process with pid 70036 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70036 00:05:53.904 11:48:51 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70036 00:05:53.904 spdk_app_start is called in Round 0. 00:05:53.904 Shutdown signal received, stop current app iteration 00:05:53.904 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:53.904 spdk_app_start is called in Round 1. 00:05:53.904 Shutdown signal received, stop current app iteration 00:05:53.904 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:53.904 spdk_app_start is called in Round 2. 
00:05:53.904 Shutdown signal received, stop current app iteration 00:05:53.904 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:53.904 spdk_app_start is called in Round 3. 00:05:53.904 Shutdown signal received, stop current app iteration 00:05:53.904 11:48:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.904 11:48:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.905 00:05:53.905 real 0m17.272s 00:05:53.905 user 0m38.191s 00:05:53.905 sys 0m2.344s 00:05:53.905 11:48:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.905 11:48:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.905 ************************************ 00:05:53.905 END TEST app_repeat 00:05:53.905 ************************************ 00:05:53.905 11:48:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.905 11:48:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.905 11:48:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.905 11:48:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.905 11:48:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.905 ************************************ 00:05:53.905 START TEST cpu_locks 00:05:53.905 ************************************ 00:05:53.905 11:48:51 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.166 * Looking for test storage... 
00:05:54.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.166 11:48:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.166 --rc genhtml_branch_coverage=1 00:05:54.166 --rc genhtml_function_coverage=1 00:05:54.166 --rc genhtml_legend=1 00:05:54.166 --rc geninfo_all_blocks=1 00:05:54.166 --rc geninfo_unexecuted_blocks=1 00:05:54.166 00:05:54.166 ' 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.166 --rc genhtml_branch_coverage=1 00:05:54.166 --rc genhtml_function_coverage=1 00:05:54.166 --rc genhtml_legend=1 00:05:54.166 --rc geninfo_all_blocks=1 00:05:54.166 --rc geninfo_unexecuted_blocks=1 
00:05:54.166 00:05:54.166 ' 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.166 --rc genhtml_branch_coverage=1 00:05:54.166 --rc genhtml_function_coverage=1 00:05:54.166 --rc genhtml_legend=1 00:05:54.166 --rc geninfo_all_blocks=1 00:05:54.166 --rc geninfo_unexecuted_blocks=1 00:05:54.166 00:05:54.166 ' 00:05:54.166 11:48:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.166 --rc genhtml_branch_coverage=1 00:05:54.166 --rc genhtml_function_coverage=1 00:05:54.166 --rc genhtml_legend=1 00:05:54.166 --rc geninfo_all_blocks=1 00:05:54.166 --rc geninfo_unexecuted_blocks=1 00:05:54.166 00:05:54.166 ' 00:05:54.167 11:48:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.167 11:48:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.167 11:48:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.167 11:48:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.167 11:48:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.167 11:48:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.167 11:48:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.167 ************************************ 00:05:54.167 START TEST default_locks 00:05:54.167 ************************************ 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70461 00:05:54.167 
11:48:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70461 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70461 ']' 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.167 11:48:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.167 [2024-12-06 11:48:51.920344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:54.167 [2024-12-06 11:48:51.920541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70461 ] 00:05:54.427 [2024-12-06 11:48:52.064416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.427 [2024-12-06 11:48:52.108806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.368 11:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.368 11:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:55.368 11:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70461 00:05:55.368 11:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70461 00:05:55.368 11:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70461 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70461 ']' 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70461 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70461 00:05:55.629 killing process with pid 70461 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70461' 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70461 00:05:55.629 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70461 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70461 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70461 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70461 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70461 ']' 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.889 ERROR: process (pid: 70461) is no longer running 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.889 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70461) - No such process 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.889 00:05:55.889 real 0m1.783s 00:05:55.889 user 0m1.748s 00:05:55.889 sys 0m0.620s 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.889 11:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.889 ************************************ 00:05:55.889 END TEST default_locks 00:05:55.889 ************************************ 00:05:56.150 11:48:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.150 11:48:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:56.150 11:48:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.150 11:48:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.150 ************************************ 00:05:56.150 START TEST default_locks_via_rpc 00:05:56.150 ************************************ 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70514 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70514 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70514 ']' 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.150 11:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.150 [2024-12-06 11:48:53.796411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:56.150 [2024-12-06 11:48:53.796636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70514 ] 00:05:56.410 [2024-12-06 11:48:53.943115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.410 [2024-12-06 11:48:53.987933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.981 11:48:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70514 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70514 00:05:56.981 11:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70514 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70514 ']' 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70514 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70514 00:05:57.591 killing process with pid 70514 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70514' 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70514 00:05:57.591 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70514 00:05:57.851 ************************************ 00:05:57.851 END TEST default_locks_via_rpc 00:05:57.851 ************************************ 00:05:57.851 00:05:57.851 real 0m1.759s 00:05:57.851 user 0m1.708s 00:05:57.851 sys 0m0.618s 00:05:57.851 
11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.851 11:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.851 11:48:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.851 11:48:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.851 11:48:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.851 11:48:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.851 ************************************ 00:05:57.851 START TEST non_locking_app_on_locked_coremask 00:05:57.851 ************************************ 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70561 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70561 /var/tmp/spdk.sock 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70561 ']' 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.851 11:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.110 [2024-12-06 11:48:55.624864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:58.110 [2024-12-06 11:48:55.624990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70561 ] 00:05:58.110 [2024-12-06 11:48:55.769100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.110 [2024-12-06 11:48:55.815847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70577 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70577 /var/tmp/spdk2.sock 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70577 ']' 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.680 11:48:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.680 11:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.940 [2024-12-06 11:48:56.489407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:58.940 [2024-12-06 11:48:56.489969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70577 ] 00:05:58.940 [2024-12-06 11:48:56.624265] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.940 [2024-12-06 11:48:56.624313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.200 [2024-12-06 11:48:56.712696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.770 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.770 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.770 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70561 00:05:59.770 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70561 00:05:59.770 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70561 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70561 ']' 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70561 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70561 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.030 killing process with pid 70561 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70561' 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70561 00:06:00.030 11:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70561 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70577 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70577 ']' 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70577 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70577 00:06:00.969 killing process with pid 70577 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70577' 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70577 00:06:00.969 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70577 00:06:01.229 00:06:01.229 real 0m3.418s 00:06:01.229 user 0m3.568s 00:06:01.229 sys 0m0.981s 00:06:01.229 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:01.229 ************************************ 00:06:01.229 END TEST non_locking_app_on_locked_coremask 00:06:01.229 ************************************ 00:06:01.229 11:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.488 11:48:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.488 11:48:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.488 11:48:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.488 11:48:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.488 ************************************ 00:06:01.488 START TEST locking_app_on_unlocked_coremask 00:06:01.488 ************************************ 00:06:01.488 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:01.488 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70640 00:06:01.488 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70640 /var/tmp/spdk.sock 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70640 ']' 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.489 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 [2024-12-06 11:48:59.104801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:01.489 [2024-12-06 11:48:59.105015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70640 ] 00:06:01.748 [2024-12-06 11:48:59.248045] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.748 [2024-12-06 11:48:59.248106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.748 [2024-12-06 11:48:59.293184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70651 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70651 /var/tmp/spdk2.sock 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70651 
']' 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.316 11:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.316 [2024-12-06 11:49:00.040867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:02.316 [2024-12-06 11:49:00.041076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70651 ] 00:06:02.575 [2024-12-06 11:49:00.172865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.575 [2024-12-06 11:49:00.261333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.144 11:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.144 11:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.144 11:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70651 00:06:03.144 11:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70651 00:06:03.144 11:49:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70640 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70640 ']' 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70640 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70640 00:06:04.080 killing process with pid 70640 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70640' 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70640 00:06:04.080 11:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70640 00:06:04.648 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70651 00:06:04.648 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70651 ']' 00:06:04.648 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70651 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70651 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.907 killing process with pid 70651 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70651' 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70651 00:06:04.907 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70651 00:06:05.167 ************************************ 00:06:05.167 END TEST locking_app_on_unlocked_coremask 00:06:05.167 ************************************ 00:06:05.167 00:06:05.167 real 0m3.821s 00:06:05.167 user 0m3.975s 00:06:05.167 sys 0m1.225s 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 11:49:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.167 11:49:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.167 11:49:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.167 11:49:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 ************************************ 00:06:05.167 START TEST 
locking_app_on_locked_coremask 00:06:05.167 ************************************ 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70720 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70720 /var/tmp/spdk.sock 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70720 ']' 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.167 11:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 [2024-12-06 11:49:02.992278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:05.427 [2024-12-06 11:49:02.992419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:06:05.427 [2024-12-06 11:49:03.136801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.427 [2024-12-06 11:49:03.182625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70738 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70738 /var/tmp/spdk2.sock 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70738 /var/tmp/spdk2.sock 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70738 /var/tmp/spdk2.sock 00:06:06.367 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70738 ']' 00:06:06.368 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.368 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.368 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.368 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.368 11:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.368 [2024-12-06 11:49:03.882963] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:06.368 [2024-12-06 11:49:03.883519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:06:06.368 [2024-12-06 11:49:04.019112] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70720 has claimed it. 00:06:06.368 [2024-12-06 11:49:04.019179] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:06.937 ERROR: process (pid: 70738) is no longer running 00:06:06.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70738) - No such process 00:06:06.937 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.937 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:06.937 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70720 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70720 00:06:06.938 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70720 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70720 ']' 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70720 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70720 00:06:07.198 
11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70720' 00:06:07.198 killing process with pid 70720 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70720 00:06:07.198 11:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70720 00:06:07.767 00:06:07.767 real 0m2.320s 00:06:07.767 user 0m2.494s 00:06:07.767 sys 0m0.653s 00:06:07.767 11:49:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.767 11:49:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 ************************************ 00:06:07.767 END TEST locking_app_on_locked_coremask 00:06:07.767 ************************************ 00:06:07.767 11:49:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.767 11:49:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.767 11:49:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.767 11:49:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 ************************************ 00:06:07.767 START TEST locking_overlapped_coremask 00:06:07.767 ************************************ 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 
00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70785 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70785 /var/tmp/spdk.sock 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70785 ']' 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.767 11:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 [2024-12-06 11:49:05.387927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:07.767 [2024-12-06 11:49:05.388066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70785 ] 00:06:08.025 [2024-12-06 11:49:05.534071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.025 [2024-12-06 11:49:05.579504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.025 [2024-12-06 11:49:05.579633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.025 [2024-12-06 11:49:05.579765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70798 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70798 /var/tmp/spdk2.sock 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70798 /var/tmp/spdk2.sock 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70798 /var/tmp/spdk2.sock 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70798 ']' 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.593 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.593 [2024-12-06 11:49:06.283511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:08.593 [2024-12-06 11:49:06.283643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70798 ] 00:06:08.852 [2024-12-06 11:49:06.419141] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70785 has claimed it. 00:06:08.852 [2024-12-06 11:49:06.419227] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:09.421 ERROR: process (pid: 70798) is no longer running 00:06:09.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70798) - No such process 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70785 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 70785 ']' 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 70785 00:06:09.422 11:49:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70785 00:06:09.422 killing process with pid 70785 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70785' 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 70785 00:06:09.422 11:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 70785 00:06:09.685 00:06:09.685 real 0m2.046s 00:06:09.685 user 0m5.408s 00:06:09.685 sys 0m0.524s 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.685 ************************************ 00:06:09.685 END TEST locking_overlapped_coremask 00:06:09.685 ************************************ 00:06:09.685 11:49:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.685 11:49:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.685 11:49:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.685 11:49:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.685 ************************************ 00:06:09.685 START TEST 
locking_overlapped_coremask_via_rpc 00:06:09.685 ************************************ 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70845 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70845 /var/tmp/spdk.sock 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70845 ']' 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.685 11:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.954 [2024-12-06 11:49:07.491837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:09.954 [2024-12-06 11:49:07.491945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:06:09.954 [2024-12-06 11:49:07.636238] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.954 [2024-12-06 11:49:07.636306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.954 [2024-12-06 11:49:07.681968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.954 [2024-12-06 11:49:07.682079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.954 [2024-12-06 11:49:07.682210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70858 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70858 /var/tmp/spdk2.sock 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70858 ']' 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.533 11:49:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.533 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.794 11:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.794 [2024-12-06 11:49:08.373496] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:10.794 [2024-12-06 11:49:08.373967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:06:10.794 [2024-12-06 11:49:08.507940] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.794 [2024-12-06 11:49:08.507984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.054 [2024-12-06 11:49:08.602548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.054 [2024-12-06 11:49:08.602562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.054 [2024-12-06 11:49:08.602660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.624 11:49:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.624 [2024-12-06 11:49:09.196431] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70845 has claimed it. 00:06:11.624 request: 00:06:11.624 { 00:06:11.624 "method": "framework_enable_cpumask_locks", 00:06:11.624 "req_id": 1 00:06:11.624 } 00:06:11.624 Got JSON-RPC error response 00:06:11.624 response: 00:06:11.624 { 00:06:11.624 "code": -32603, 00:06:11.624 "message": "Failed to claim CPU core: 2" 00:06:11.624 } 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70845 /var/tmp/spdk.sock 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 70845 ']' 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.624 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70858 /var/tmp/spdk2.sock 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70858 ']' 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.885 00:06:11.885 real 0m2.208s 00:06:11.885 user 0m0.988s 00:06:11.885 sys 0m0.152s 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.885 11:49:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.885 ************************************ 00:06:11.885 END TEST locking_overlapped_coremask_via_rpc 00:06:11.885 ************************************ 00:06:12.145 11:49:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.145 11:49:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70845 ]] 00:06:12.145 11:49:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70845 00:06:12.145 11:49:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70845 ']' 00:06:12.145 11:49:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70845 00:06:12.145 11:49:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:12.145 11:49:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70845 00:06:12.146 killing process with pid 70845 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70845' 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70845 00:06:12.146 11:49:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70845 00:06:12.406 11:49:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70858 ]] 00:06:12.406 11:49:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70858 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70858 ']' 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70858 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70858 00:06:12.406 killing process with pid 70858 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70858' 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70858 00:06:12.406 11:49:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70858 00:06:12.975 11:49:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.975 11:49:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.975 11:49:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70845 ]] 00:06:12.975 11:49:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70845 00:06:12.975 11:49:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70845 ']' 00:06:12.975 11:49:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70845 00:06:12.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70845) - No such process 00:06:12.975 11:49:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70845 is not found' 00:06:12.975 Process with pid 70845 is not found 00:06:12.975 11:49:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70858 ]] 00:06:12.976 11:49:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70858 00:06:12.976 11:49:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70858 ']' 00:06:12.976 11:49:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70858 00:06:12.976 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70858) - No such process 00:06:12.976 Process with pid 70858 is not found 00:06:12.976 11:49:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70858 is not found' 00:06:12.976 11:49:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.976 00:06:12.976 real 0m18.934s 00:06:12.976 user 0m31.010s 00:06:12.976 sys 0m5.813s 00:06:12.976 11:49:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.976 11:49:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 
************************************ 00:06:12.976 END TEST cpu_locks 00:06:12.976 ************************************ 00:06:12.976 00:06:12.976 real 0m45.918s 00:06:12.976 user 1m25.715s 00:06:12.976 sys 0m9.302s 00:06:12.976 11:49:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.976 11:49:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 ************************************ 00:06:12.976 END TEST event 00:06:12.976 ************************************ 00:06:12.976 11:49:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.976 11:49:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.976 11:49:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.976 11:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 ************************************ 00:06:12.976 START TEST thread 00:06:12.976 ************************************ 00:06:12.976 11:49:10 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:13.237 * Looking for test storage... 
00:06:13.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.237 11:49:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.237 11:49:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.237 11:49:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.237 11:49:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.237 11:49:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.237 11:49:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.237 11:49:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.237 11:49:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.237 11:49:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.237 11:49:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.237 11:49:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.237 11:49:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:13.237 11:49:10 thread -- scripts/common.sh@345 -- # : 1 00:06:13.237 11:49:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.237 11:49:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.237 11:49:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:13.237 11:49:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:13.237 11:49:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.237 11:49:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:13.237 11:49:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.237 11:49:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:13.237 11:49:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:13.237 11:49:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.237 11:49:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:13.237 11:49:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.237 11:49:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.237 11:49:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.237 11:49:10 thread -- scripts/common.sh@368 -- # return 0 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.237 --rc genhtml_branch_coverage=1 00:06:13.237 --rc genhtml_function_coverage=1 00:06:13.237 --rc genhtml_legend=1 00:06:13.237 --rc geninfo_all_blocks=1 00:06:13.237 --rc geninfo_unexecuted_blocks=1 00:06:13.237 00:06:13.237 ' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.237 --rc genhtml_branch_coverage=1 00:06:13.237 --rc genhtml_function_coverage=1 00:06:13.237 --rc genhtml_legend=1 00:06:13.237 --rc geninfo_all_blocks=1 00:06:13.237 --rc geninfo_unexecuted_blocks=1 00:06:13.237 00:06:13.237 ' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.237 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.237 --rc genhtml_branch_coverage=1 00:06:13.237 --rc genhtml_function_coverage=1 00:06:13.237 --rc genhtml_legend=1 00:06:13.237 --rc geninfo_all_blocks=1 00:06:13.237 --rc geninfo_unexecuted_blocks=1 00:06:13.237 00:06:13.237 ' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.237 --rc genhtml_branch_coverage=1 00:06:13.237 --rc genhtml_function_coverage=1 00:06:13.237 --rc genhtml_legend=1 00:06:13.237 --rc geninfo_all_blocks=1 00:06:13.237 --rc geninfo_unexecuted_blocks=1 00:06:13.237 00:06:13.237 ' 00:06:13.237 11:49:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.237 11:49:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.237 ************************************ 00:06:13.237 START TEST thread_poller_perf 00:06:13.237 ************************************ 00:06:13.237 11:49:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.237 [2024-12-06 11:49:10.934168] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:13.237 [2024-12-06 11:49:10.934324] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:06:13.499 [2024-12-06 11:49:11.077743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.499 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:13.499 [2024-12-06 11:49:11.128498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.883 [2024-12-06T11:49:12.641Z] ====================================== 00:06:14.883 [2024-12-06T11:49:12.641Z] busy:2297259126 (cyc) 00:06:14.883 [2024-12-06T11:49:12.641Z] total_run_count: 431000 00:06:14.883 [2024-12-06T11:49:12.641Z] tsc_hz: 2290000000 (cyc) 00:06:14.883 [2024-12-06T11:49:12.641Z] ====================================== 00:06:14.883 [2024-12-06T11:49:12.641Z] poller_cost: 5330 (cyc), 2327 (nsec) 00:06:14.883 00:06:14.883 real 0m1.329s 00:06:14.883 user 0m1.136s 00:06:14.883 sys 0m0.088s 00:06:14.883 11:49:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.883 ************************************ 00:06:14.883 END TEST thread_poller_perf 00:06:14.883 ************************************ 00:06:14.883 11:49:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.883 11:49:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.883 11:49:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:14.883 11:49:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.883 11:49:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.883 ************************************ 00:06:14.883 START TEST thread_poller_perf 00:06:14.883 
************************************ 00:06:14.883 11:49:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.883 [2024-12-06 11:49:12.333938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:14.883 [2024-12-06 11:49:12.334060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71027 ] 00:06:14.883 [2024-12-06 11:49:12.478557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.883 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:14.883 [2024-12-06 11:49:12.522541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.266 [2024-12-06T11:49:14.024Z] ====================================== 00:06:16.266 [2024-12-06T11:49:14.024Z] busy:2293188324 (cyc) 00:06:16.266 [2024-12-06T11:49:14.024Z] total_run_count: 5659000 00:06:16.266 [2024-12-06T11:49:14.024Z] tsc_hz: 2290000000 (cyc) 00:06:16.266 [2024-12-06T11:49:14.024Z] ====================================== 00:06:16.266 [2024-12-06T11:49:14.024Z] poller_cost: 405 (cyc), 176 (nsec) 00:06:16.266 00:06:16.266 real 0m1.320s 00:06:16.266 user 0m1.131s 00:06:16.266 sys 0m0.083s 00:06:16.266 11:49:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.266 11:49:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.266 ************************************ 00:06:16.266 END TEST thread_poller_perf 00:06:16.266 ************************************ 00:06:16.266 11:49:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:16.266 00:06:16.266 real 0m3.003s 00:06:16.266 user 0m2.441s 00:06:16.266 sys 0m0.365s 00:06:16.266 11:49:13 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.266 11:49:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.266 ************************************ 00:06:16.266 END TEST thread 00:06:16.266 ************************************ 00:06:16.266 11:49:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:16.266 11:49:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.266 11:49:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.266 11:49:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.266 11:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.266 ************************************ 00:06:16.266 START TEST app_cmdline 00:06:16.266 ************************************ 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.266 * Looking for test storage... 00:06:16.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.266 11:49:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.266 --rc genhtml_branch_coverage=1 00:06:16.266 --rc genhtml_function_coverage=1 00:06:16.266 --rc 
genhtml_legend=1 00:06:16.266 --rc geninfo_all_blocks=1 00:06:16.266 --rc geninfo_unexecuted_blocks=1 00:06:16.266 00:06:16.266 ' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.266 --rc genhtml_branch_coverage=1 00:06:16.266 --rc genhtml_function_coverage=1 00:06:16.266 --rc genhtml_legend=1 00:06:16.266 --rc geninfo_all_blocks=1 00:06:16.266 --rc geninfo_unexecuted_blocks=1 00:06:16.266 00:06:16.266 ' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.266 --rc genhtml_branch_coverage=1 00:06:16.266 --rc genhtml_function_coverage=1 00:06:16.266 --rc genhtml_legend=1 00:06:16.266 --rc geninfo_all_blocks=1 00:06:16.266 --rc geninfo_unexecuted_blocks=1 00:06:16.266 00:06:16.266 ' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.266 --rc genhtml_branch_coverage=1 00:06:16.266 --rc genhtml_function_coverage=1 00:06:16.266 --rc genhtml_legend=1 00:06:16.266 --rc geninfo_all_blocks=1 00:06:16.266 --rc geninfo_unexecuted_blocks=1 00:06:16.266 00:06:16.266 ' 00:06:16.266 11:49:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.266 11:49:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71116 00:06:16.266 11:49:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.266 11:49:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71116 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71116 ']' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.266 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.266 11:49:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.526 [2024-12-06 11:49:14.044394] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:16.526 [2024-12-06 11:49:14.044522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:06:16.526 [2024-12-06 11:49:14.187850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.526 [2024-12-06 11:49:14.231473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.096 11:49:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.096 11:49:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:17.096 11:49:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:17.357 { 00:06:17.357 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:17.357 "fields": { 00:06:17.357 "major": 24, 00:06:17.357 "minor": 9, 00:06:17.357 "patch": 1, 00:06:17.357 "suffix": "-pre", 00:06:17.357 "commit": "b18e1bd62" 00:06:17.357 } 00:06:17.357 } 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.357 11:49:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:17.357 11:49:15 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.617 request: 00:06:17.617 { 00:06:17.617 "method": "env_dpdk_get_mem_stats", 00:06:17.617 "req_id": 1 00:06:17.617 } 00:06:17.617 Got JSON-RPC error response 00:06:17.617 response: 00:06:17.617 { 00:06:17.617 "code": -32601, 00:06:17.617 "message": "Method not found" 00:06:17.617 } 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.617 11:49:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71116 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71116 ']' 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71116 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71116 00:06:17.617 killing process with pid 71116 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71116' 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 71116 00:06:17.617 11:49:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 71116 00:06:18.185 ************************************ 00:06:18.185 END TEST app_cmdline 00:06:18.185 ************************************ 
00:06:18.185 00:06:18.185 real 0m1.971s 00:06:18.185 user 0m2.175s 00:06:18.185 sys 0m0.539s 00:06:18.185 11:49:15 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.185 11:49:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.185 11:49:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:18.185 11:49:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.185 11:49:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.185 11:49:15 -- common/autotest_common.sh@10 -- # set +x 00:06:18.185 ************************************ 00:06:18.185 START TEST version 00:06:18.185 ************************************ 00:06:18.185 11:49:15 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:18.185 * Looking for test storage... 00:06:18.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.185 11:49:15 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.185 11:49:15 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.185 11:49:15 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.446 11:49:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.446 11:49:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.446 11:49:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.446 11:49:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.446 11:49:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.446 11:49:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.446 11:49:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.446 11:49:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.446 11:49:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.446 11:49:15 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:18.446 11:49:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.446 11:49:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:18.446 11:49:15 version -- scripts/common.sh@345 -- # : 1 00:06:18.446 11:49:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.446 11:49:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.446 11:49:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:18.446 11:49:15 version -- scripts/common.sh@353 -- # local d=1 00:06:18.446 11:49:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.446 11:49:15 version -- scripts/common.sh@355 -- # echo 1 00:06:18.446 11:49:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.446 11:49:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:18.446 11:49:15 version -- scripts/common.sh@353 -- # local d=2 00:06:18.446 11:49:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.446 11:49:15 version -- scripts/common.sh@355 -- # echo 2 00:06:18.446 11:49:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.446 11:49:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.446 11:49:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.446 11:49:15 version -- scripts/common.sh@368 -- # return 0 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.446 --rc genhtml_branch_coverage=1 00:06:18.446 --rc genhtml_function_coverage=1 00:06:18.446 --rc genhtml_legend=1 00:06:18.446 --rc geninfo_all_blocks=1 00:06:18.446 --rc geninfo_unexecuted_blocks=1 00:06:18.446 00:06:18.446 ' 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.446 --rc genhtml_branch_coverage=1 00:06:18.446 --rc genhtml_function_coverage=1 00:06:18.446 --rc genhtml_legend=1 00:06:18.446 --rc geninfo_all_blocks=1 00:06:18.446 --rc geninfo_unexecuted_blocks=1 00:06:18.446 00:06:18.446 ' 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.446 --rc genhtml_branch_coverage=1 00:06:18.446 --rc genhtml_function_coverage=1 00:06:18.446 --rc genhtml_legend=1 00:06:18.446 --rc geninfo_all_blocks=1 00:06:18.446 --rc geninfo_unexecuted_blocks=1 00:06:18.446 00:06:18.446 ' 00:06:18.446 11:49:15 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.446 --rc genhtml_branch_coverage=1 00:06:18.446 --rc genhtml_function_coverage=1 00:06:18.446 --rc genhtml_legend=1 00:06:18.446 --rc geninfo_all_blocks=1 00:06:18.446 --rc geninfo_unexecuted_blocks=1 00:06:18.446 00:06:18.446 ' 00:06:18.446 11:49:15 version -- app/version.sh@17 -- # get_header_version major 00:06:18.446 11:49:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.446 11:49:15 version -- app/version.sh@14 -- # cut -f2 00:06:18.446 11:49:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.446 11:49:16 version -- app/version.sh@17 -- # major=24 00:06:18.446 11:49:16 version -- app/version.sh@18 -- # get_header_version minor 00:06:18.446 11:49:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.446 11:49:16 version -- app/version.sh@14 -- # cut -f2 00:06:18.446 11:49:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.446 11:49:16 version -- app/version.sh@18 -- # minor=9 00:06:18.446 11:49:16 
version -- app/version.sh@19 -- # get_header_version patch 00:06:18.446 11:49:16 version -- app/version.sh@14 -- # cut -f2 00:06:18.447 11:49:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.447 11:49:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.447 11:49:16 version -- app/version.sh@19 -- # patch=1 00:06:18.447 11:49:16 version -- app/version.sh@20 -- # get_header_version suffix 00:06:18.447 11:49:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.447 11:49:16 version -- app/version.sh@14 -- # cut -f2 00:06:18.447 11:49:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.447 11:49:16 version -- app/version.sh@20 -- # suffix=-pre 00:06:18.447 11:49:16 version -- app/version.sh@22 -- # version=24.9 00:06:18.447 11:49:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.447 11:49:16 version -- app/version.sh@25 -- # version=24.9.1 00:06:18.447 11:49:16 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:18.447 11:49:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:18.447 11:49:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.447 11:49:16 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:18.447 11:49:16 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:18.447 ************************************ 00:06:18.447 END TEST version 00:06:18.447 ************************************ 00:06:18.447 00:06:18.447 real 0m0.317s 00:06:18.447 user 0m0.185s 00:06:18.447 sys 0m0.182s 00:06:18.447 11:49:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:18.447 11:49:16 version -- common/autotest_common.sh@10 -- # set +x 00:06:18.447 11:49:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:18.447 11:49:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:18.447 11:49:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:18.447 11:49:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.447 11:49:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.447 11:49:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.447 ************************************ 00:06:18.447 START TEST bdev_raid 00:06:18.447 ************************************ 00:06:18.447 11:49:16 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:18.706 * Looking for test storage... 00:06:18.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.706 11:49:16 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.706 11:49:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.706 --rc genhtml_branch_coverage=1 00:06:18.706 --rc genhtml_function_coverage=1 00:06:18.706 --rc genhtml_legend=1 00:06:18.706 --rc geninfo_all_blocks=1 00:06:18.706 --rc geninfo_unexecuted_blocks=1 00:06:18.706 00:06:18.706 ' 00:06:18.706 11:49:16 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.706 --rc genhtml_branch_coverage=1 00:06:18.706 --rc genhtml_function_coverage=1 00:06:18.706 --rc genhtml_legend=1 00:06:18.706 --rc geninfo_all_blocks=1 00:06:18.706 --rc geninfo_unexecuted_blocks=1 00:06:18.706 00:06:18.706 ' 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.706 --rc genhtml_branch_coverage=1 00:06:18.706 --rc genhtml_function_coverage=1 00:06:18.706 --rc genhtml_legend=1 00:06:18.706 --rc geninfo_all_blocks=1 00:06:18.706 --rc geninfo_unexecuted_blocks=1 00:06:18.706 00:06:18.706 ' 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.706 --rc genhtml_branch_coverage=1 00:06:18.706 --rc genhtml_function_coverage=1 00:06:18.706 --rc genhtml_legend=1 00:06:18.706 --rc geninfo_all_blocks=1 00:06:18.706 --rc geninfo_unexecuted_blocks=1 00:06:18.706 00:06:18.706 ' 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.706 11:49:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:18.706 11:49:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.706 11:49:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.706 11:49:16 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.706 ************************************ 00:06:18.706 START TEST raid1_resize_data_offset_test 00:06:18.706 ************************************ 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71276 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71276' 00:06:18.706 Process raid pid: 71276 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71276 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71276 ']' 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.706 11:49:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.965 [2024-12-06 11:49:16.483823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:18.965 [2024-12-06 11:49:16.484054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.965 [2024-12-06 11:49:16.628613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.965 [2024-12-06 11:49:16.672315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.965 [2024-12-06 11:49:16.713760] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.965 [2024-12-06 11:49:16.713869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.531 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.531 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:19.531 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:19.531 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.531 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 malloc0 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 malloc1 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 null0 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 [2024-12-06 11:49:17.361960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:19.791 [2024-12-06 11:49:17.363823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:19.791 [2024-12-06 11:49:17.363944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:19.791 [2024-12-06 11:49:17.364072] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:19.791 [2024-12-06 11:49:17.364084] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:19.791 [2024-12-06 11:49:17.364341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:19.791 [2024-12-06 11:49:17.364482] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:19.791 [2024-12-06 11:49:17.364495] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:19.791 [2024-12-06 11:49:17.364611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 [2024-12-06 11:49:17.425840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.791 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 malloc2 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 [2024-12-06 11:49:17.559707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:20.051 [2024-12-06 11:49:17.564702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.051 [2024-12-06 11:49:17.566939] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71276 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71276 ']' 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71276 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71276 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71276' 00:06:20.051 killing process with pid 71276 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71276 00:06:20.051 [2024-12-06 11:49:17.644603] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:20.051 11:49:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71276 00:06:20.051 [2024-12-06 11:49:17.644949] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:20.051 [2024-12-06 11:49:17.645045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:20.051 [2024-12-06 11:49:17.645086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:20.051 [2024-12-06 11:49:17.650402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:20.051 [2024-12-06 11:49:17.650724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:20.051 [2024-12-06 11:49:17.650776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:20.311 [2024-12-06 11:49:17.859484] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:20.572 11:49:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:20.572 00:06:20.572 real 0m1.681s 00:06:20.572 user 0m1.653s 00:06:20.572 sys 0m0.420s 00:06:20.572 
************************************ 00:06:20.572 END TEST raid1_resize_data_offset_test 00:06:20.572 ************************************ 00:06:20.572 11:49:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.572 11:49:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.572 11:49:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:20.572 11:49:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.572 11:49:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.572 11:49:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:20.572 ************************************ 00:06:20.572 START TEST raid0_resize_superblock_test 00:06:20.572 ************************************ 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71332 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71332' 00:06:20.572 Process raid pid: 71332 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71332 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71332 ']' 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.572 11:49:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.572 [2024-12-06 11:49:18.233291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:20.573 [2024-12-06 11:49:18.233502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.833 [2024-12-06 11:49:18.377047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.833 [2024-12-06 11:49:18.421636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.833 [2024-12-06 11:49:18.463153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.833 [2024-12-06 11:49:18.463287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.403 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.403 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:21.403 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:21.403 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.403 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:21.664 malloc0 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 [2024-12-06 11:49:19.180566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:21.664 [2024-12-06 11:49:19.180627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.664 [2024-12-06 11:49:19.180654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:21.664 [2024-12-06 11:49:19.180673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.664 [2024-12-06 11:49:19.182728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.664 [2024-12-06 11:49:19.182830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:21.664 pt0 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 37367532-3b94-4ac2-82b9-a91421af1a19 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 42f414be-e39f-45f5-8140-786d6e2de576 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 971320be-43b7-4106-9d93-e7dc717df98b 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 [2024-12-06 11:49:19.315292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 42f414be-e39f-45f5-8140-786d6e2de576 is claimed 00:06:21.664 [2024-12-06 11:49:19.315433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 971320be-43b7-4106-9d93-e7dc717df98b is claimed 00:06:21.664 [2024-12-06 11:49:19.315560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:21.664 [2024-12-06 11:49:19.315573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:21.664 [2024-12-06 11:49:19.315826] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:06:21.664 [2024-12-06 11:49:19.315968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:06:21.664 [2024-12-06 11:49:19.315978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
00:06:21.664 [2024-12-06 11:49:19.316108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:21.664 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:21.664 [2024-12-06 11:49:19.411375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.925 [2024-12-06 11:49:19.463294] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:21.925 [2024-12-06 11:49:19.463316] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '42f414be-e39f-45f5-8140-786d6e2de576' was resized: old size 131072, new size 204800
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.925 [2024-12-06 11:49:19.475247] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:21.925 [2024-12-06 11:49:19.475267] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '971320be-43b7-4106-9d93-e7dc717df98b' was resized: old size 131072, new size 204800
00:06:21.925 [2024-12-06 11:49:19.475298] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:21.925 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.926 [2024-12-06 11:49:19.587346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.926 [2024-12-06 11:49:19.631154] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:21.926 [2024-12-06 11:49:19.631220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:21.926 [2024-12-06 11:49:19.631254] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:21.926 [2024-12-06 11:49:19.631270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:21.926 [2024-12-06 11:49:19.631367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:21.926 [2024-12-06 11:49:19.631398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:21.926 [2024-12-06 11:49:19.631409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.926 [2024-12-06 11:49:19.643121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:21.926 [2024-12-06 11:49:19.643168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:21.926 [2024-12-06 11:49:19.643200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:06:21.926 [2024-12-06 11:49:19.643210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:21.926 [2024-12-06 11:49:19.645198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:21.926 [2024-12-06 11:49:19.645245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:21.926 [2024-12-06 11:49:19.646585] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 42f414be-e39f-45f5-8140-786d6e2de576
00:06:21.926 [2024-12-06 11:49:19.646638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 42f414be-e39f-45f5-8140-786d6e2de576 is claimed
00:06:21.926 [2024-12-06 11:49:19.646713] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 971320be-43b7-4106-9d93-e7dc717df98b
00:06:21.926 [2024-12-06 11:49:19.646734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 971320be-43b7-4106-9d93-e7dc717df98b is claimed
00:06:21.926 [2024-12-06 11:49:19.646815] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 971320be-43b7-4106-9d93-e7dc717df98b (2) smaller than existing raid bdev Raid (3)
00:06:21.926 [2024-12-06 11:49:19.646834] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 42f414be-e39f-45f5-8140-786d6e2de576: File exists
00:06:21.926 [2024-12-06 11:49:19.646872] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
00:06:21.926 [2024-12-06 11:49:19.646881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:21.926 [2024-12-06 11:49:19.647106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:06:21.926 [2024-12-06 11:49:19.647257] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
00:06:21.926 [2024-12-06 11:49:19.647267] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
00:06:21.926 [2024-12-06 11:49:19.647404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:21.926 pt0
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.926 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:21.926 [2024-12-06 11:49:19.667551] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71332
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71332 ']'
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71332
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71332
00:06:22.187 killing process with pid 71332
11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71332'
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71332
00:06:22.187 [2024-12-06 11:49:19.753340] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:22.187 [2024-12-06 11:49:19.753390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:22.187 [2024-12-06 11:49:19.753424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:22.187 [2024-12-06 11:49:19.753431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:22.187 11:49:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71332
00:06:22.187 [2024-12-06 11:49:19.911900] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:22.447 11:49:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:22.447
00:06:22.447 real 0m1.993s
00:06:22.447 user 0m2.252s
00:06:22.447 sys 0m0.499s
00:06:22.447 11:49:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:22.447 ************************************
00:06:22.447 END TEST raid0_resize_superblock_test
00:06:22.447 ************************************
00:06:22.447 11:49:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:22.707 11:49:20 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:22.707 11:49:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:22.707 11:49:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:22.707 11:49:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:22.707 ************************************
00:06:22.707 START TEST raid1_resize_superblock_test
00:06:22.707 ************************************
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71403
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71403'
00:06:22.707 Process raid pid: 71403
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71403
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71403 ']'
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:22.707 11:49:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:22.707 [2024-12-06 11:49:20.303819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:22.707 [2024-12-06 11:49:20.304022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:22.707 [2024-12-06 11:49:20.429804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.967 [2024-12-06 11:49:20.474570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.967 [2024-12-06 11:49:20.517134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:22.967 [2024-12-06 11:49:20.517277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.536 malloc0
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.536 [2024-12-06 11:49:21.244205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:23.536 [2024-12-06 11:49:21.244275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:23.536 [2024-12-06 11:49:21.244299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:06:23.536 [2024-12-06 11:49:21.244310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:23.536 [2024-12-06 11:49:21.246457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:23.536 [2024-12-06 11:49:21.246537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:23.536 pt0
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.536 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 25ffe36f-ea18-466e-bf9b-a305ccdbf213
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 bb865278-950f-438d-bc77-f47c62e171b8
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 1604b5bd-9edd-486a-81cc-f0e4497dd5fd
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 [2024-12-06 11:49:21.381589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev bb865278-950f-438d-bc77-f47c62e171b8 is claimed
00:06:23.797 [2024-12-06 11:49:21.381685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1604b5bd-9edd-486a-81cc-f0e4497dd5fd is claimed
00:06:23.797 [2024-12-06 11:49:21.381809] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:06:23.797 [2024-12-06 11:49:21.381823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:23.797 [2024-12-06 11:49:21.382108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:06:23.797 [2024-12-06 11:49:21.382273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:06:23.797 [2024-12-06 11:49:21.382286] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
00:06:23.797 [2024-12-06 11:49:21.382436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 [2024-12-06 11:49:21.489656] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 [2024-12-06 11:49:21.537631] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:23.797 [2024-12-06 11:49:21.537716] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'bb865278-950f-438d-bc77-f47c62e171b8' was resized: old size 131072, new size 204800
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:23.797 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:23.797 [2024-12-06 11:49:21.549534] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:23.797 [2024-12-06 11:49:21.549563] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1604b5bd-9edd-486a-81cc-f0e4497dd5fd' was resized: old size 131072, new size 204800
00:06:23.797 [2024-12-06 11:49:21.549593] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 [2024-12-06 11:49:21.637394] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 [2024-12-06 11:49:21.677127] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:24.058 [2024-12-06 11:49:21.677196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:24.058 [2024-12-06 11:49:21.677221] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:24.058 [2024-12-06 11:49:21.677409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:24.058 [2024-12-06 11:49:21.677583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:24.058 [2024-12-06 11:49:21.677634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:24.058 [2024-12-06 11:49:21.677646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 [2024-12-06 11:49:21.689059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:24.058 [2024-12-06 11:49:21.689109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:24.058 [2024-12-06 11:49:21.689143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:06:24.058 [2024-12-06 11:49:21.689155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:24.058 [2024-12-06 11:49:21.691200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:24.058 [2024-12-06 11:49:21.691301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:24.058 [2024-12-06 11:49:21.692622] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bb865278-950f-438d-bc77-f47c62e171b8
00:06:24.058 [2024-12-06 11:49:21.692672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev bb865278-950f-438d-bc77-f47c62e171b8 is claimed
00:06:24.058 [2024-12-06 11:49:21.692764] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1604b5bd-9edd-486a-81cc-f0e4497dd5fd
00:06:24.058 [2024-12-06 11:49:21.692784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1604b5bd-9edd-486a-81cc-f0e4497dd5fd is claimed
00:06:24.058 [2024-12-06 11:49:21.692859] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1604b5bd-9edd-486a-81cc-f0e4497dd5fd (2) smaller than existing raid bdev Raid (3)
00:06:24.058 [2024-12-06 11:49:21.692877] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev bb865278-950f-438d-bc77-f47c62e171b8: File exists
00:06:24.058 [2024-12-06 11:49:21.692918] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
00:06:24.058 [2024-12-06 11:49:21.692926] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:06:24.058 [2024-12-06 11:49:21.693140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:06:24.058 [2024-12-06 11:49:21.693292] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
00:06:24.058 [2024-12-06 11:49:21.693303] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
00:06:24.058 [2024-12-06 11:49:21.693440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:24.058 pt0
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.058 [2024-12-06 11:49:21.717586] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.058 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71403
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71403 ']'
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71403
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71403
00:06:24.059 killing process with pid 71403
11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71403'
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71403
00:06:24.059 [2024-12-06 11:49:21.794183] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:24.059 [2024-12-06 11:49:21.794235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:24.059 [2024-12-06 11:49:21.794287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:24.059 [2024-12-06 11:49:21.794296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:24.059 11:49:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71403
00:06:24.319 [2024-12-06 11:49:21.953810] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:24.603 11:49:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:24.603
00:06:24.603 real 0m1.968s
00:06:24.603 user 0m2.235s
00:06:24.603 sys 0m0.483s
00:06:24.603 11:49:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:24.603 ************************************
00:06:24.603 END TEST raid1_resize_superblock_test
00:06:24.603 ************************************
00:06:24.603 11:49:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # '['
Linux = Linux ']' 00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:24.603 11:49:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:24.603 11:49:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:24.603 11:49:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.603 11:49:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:24.603 ************************************ 00:06:24.603 START TEST raid_function_test_raid0 00:06:24.603 ************************************ 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71478 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71478' 00:06:24.603 Process raid pid: 71478 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71478 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71478 ']' 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.603 11:49:22 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.603 11:49:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.901 [2024-12-06 11:49:22.364621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:24.901 [2024-12-06 11:49:22.364825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.901 [2024-12-06 11:49:22.490007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.901 [2024-12-06 11:49:22.533669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.901 [2024-12-06 11:49:22.576688] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.901 [2024-12-06 11:49:22.576795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # 
set +x 00:06:25.472 Base_1 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:25.472 Base_2 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:25.472 [2024-12-06 11:49:23.217801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:25.472 [2024-12-06 11:49:23.219634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:25.472 [2024-12-06 11:49:23.219698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:25.472 [2024-12-06 11:49:23.219716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:25.472 [2024-12-06 11:49:23.219979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:25.472 [2024-12-06 11:49:23.220101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:25.472 [2024-12-06 11:49:23.220115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:25.472 [2024-12-06 11:49:23.220276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:25.472 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:25.732 [2024-12-06 11:49:23.449414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:25.732 /dev/nbd0 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.732 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.016 1+0 records in 00:06:26.016 1+0 records out 00:06:26.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558151 s, 7.3 MB/s 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.016 { 00:06:26.016 "nbd_device": "/dev/nbd0", 00:06:26.016 "bdev_name": "raid" 00:06:26.016 } 00:06:26.016 ]' 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.016 { 00:06:26.016 "nbd_device": "/dev/nbd0", 00:06:26.016 "bdev_name": "raid" 00:06:26.016 } 00:06:26.016 ]' 00:06:26.016 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:26.275 11:49:23 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:26.275 4096+0 records in 00:06:26.275 4096+0 
records out 00:06:26.275 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0373711 s, 56.1 MB/s 00:06:26.275 11:49:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:26.276 4096+0 records in 00:06:26.276 4096+0 records out 00:06:26.276 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179442 s, 11.7 MB/s 00:06:26.276 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:26.276 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:26.535 128+0 records in 00:06:26.535 128+0 records out 00:06:26.535 65536 bytes (66 kB, 64 KiB) copied, 0.00119823 s, 54.7 MB/s 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:26.535 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # 
unmap_off=526336 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:26.536 2035+0 records in 00:06:26.536 2035+0 records out 00:06:26.536 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0141863 s, 73.4 MB/s 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:26.536 456+0 records in 00:06:26.536 456+0 records out 00:06:26.536 233472 bytes (233 kB, 228 KiB) copied, 0.00386236 s, 60.4 MB/s 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.536 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.795 [2024-12-06 11:49:24.375673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.795 
11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.795 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71478 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71478 ']' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71478 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71478 00:06:27.054 killing process with pid 71478 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.054 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.055 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71478' 00:06:27.055 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71478 00:06:27.055 [2024-12-06 11:49:24.661452] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:27.055 [2024-12-06 11:49:24.661558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:27.055 [2024-12-06 11:49:24.661607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:27.055 [2024-12-06 11:49:24.661618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:27.055 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71478 00:06:27.055 [2024-12-06 11:49:24.684504] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:27.315 11:49:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:27.315 00:06:27.315 real 0m2.635s 00:06:27.315 user 0m3.214s 00:06:27.315 sys 0m0.924s 00:06:27.315 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.315 11:49:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 ************************************ 00:06:27.315 END TEST raid_function_test_raid0 00:06:27.315 ************************************ 00:06:27.315 
11:49:24 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:27.315 11:49:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:27.315 11:49:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.315 11:49:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 ************************************ 00:06:27.315 START TEST raid_function_test_concat 00:06:27.315 ************************************ 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71596 00:06:27.315 Process raid pid: 71596 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71596' 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71596 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71596 ']' 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.315 11:49:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 [2024-12-06 11:49:25.066251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:27.315 [2024-12-06 11:49:25.066377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.574 [2024-12-06 11:49:25.213302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.574 [2024-12-06 11:49:25.257553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.574 [2024-12-06 11:49:25.299709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.574 [2024-12-06 11:49:25.299747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.140 Base_1 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.140 11:49:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.140 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.399 Base_2 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.399 [2024-12-06 11:49:25.933361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:28.399 [2024-12-06 11:49:25.937436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:28.399 [2024-12-06 11:49:25.937542] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:28.399 [2024-12-06 11:49:25.937560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:28.399 [2024-12-06 11:49:25.937978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:28.399 [2024-12-06 11:49:25.938183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:28.399 [2024-12-06 11:49:25.938213] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:28.399 [2024-12-06 11:49:25.938488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.399 11:49:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:28.399 11:49:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:28.658 [2024-12-06 11:49:26.169439] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:28.658 /dev/nbd0 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.658 1+0 records in 00:06:28.658 1+0 records out 00:06:28.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331602 s, 12.4 MB/s 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.658 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.917 { 00:06:28.917 "nbd_device": "/dev/nbd0", 00:06:28.917 "bdev_name": "raid" 00:06:28.917 } 00:06:28.917 ]' 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.917 { 00:06:28.917 "nbd_device": "/dev/nbd0", 00:06:28.917 "bdev_name": "raid" 00:06:28.917 } 00:06:28.917 ]' 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:28.917 4096+0 records in 00:06:28.917 4096+0 records out 00:06:28.917 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0267333 s, 78.4 MB/s 00:06:28.917 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:29.176 4096+0 records in 00:06:29.176 4096+0 records out 00:06:29.176 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.220751 s, 9.5 MB/s 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:29.176 128+0 records in 00:06:29.176 128+0 records out 00:06:29.176 65536 bytes (66 kB, 64 KiB) copied, 0.000569018 s, 115 MB/s 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:29.176 11:49:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:29.176 2035+0 records in 00:06:29.176 2035+0 records out 00:06:29.176 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00611538 s, 170 MB/s 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:29.176 456+0 records in 00:06:29.176 456+0 records out 00:06:29.176 233472 bytes (233 kB, 228 KiB) copied, 0.00363923 s, 64.2 MB/s 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:29.176 11:49:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.176 11:49:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.435 [2024-12-06 11:49:27.037201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.435 
11:49:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.435 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71596 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71596 ']' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71596 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71596 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.694 killing process with pid 71596 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71596' 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71596 00:06:29.694 [2024-12-06 11:49:27.343647] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.694 [2024-12-06 11:49:27.343769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.694 [2024-12-06 11:49:27.343830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.694 [2024-12-06 11:49:27.343848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:29.694 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71596 00:06:29.694 [2024-12-06 11:49:27.366076] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:29.954 11:49:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:29.954 00:06:29.954 real 0m2.620s 00:06:29.954 user 0m3.194s 00:06:29.954 sys 0m0.840s 00:06:29.954 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.954 11:49:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:29.954 ************************************ 00:06:29.954 END TEST raid_function_test_concat 00:06:29.954 ************************************ 
00:06:29.954 11:49:27 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:29.954 11:49:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.954 11:49:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.954 11:49:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.954 ************************************ 00:06:29.954 START TEST raid0_resize_test 00:06:29.954 ************************************ 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71707 00:06:29.954 Process raid pid: 71707 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71707' 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71707 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 
71707 ']' 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.954 11:49:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.214 [2024-12-06 11:49:27.771795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:30.214 [2024-12-06 11:49:27.771933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.214 [2024-12-06 11:49:27.920291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.214 [2024-12-06 11:49:27.966096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.473 [2024-12-06 11:49:28.008889] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.473 [2024-12-06 11:49:28.008932] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 Base_1 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 Base_2 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-12-06 11:49:28.602087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.042 [2024-12-06 11:49:28.603874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.042 [2024-12-06 11:49:28.603934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:31.042 [2024-12-06 11:49:28.603952] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:31.042 [2024-12-06 11:49:28.604210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:31.042 [2024-12-06 11:49:28.604342] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:31.042 [2024-12-06 11:49:28.604355] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 
00:06:31.042 [2024-12-06 11:49:28.604453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-12-06 11:49:28.614027] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.042 [2024-12-06 11:49:28.614055] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:31.042 true 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-12-06 11:49:28.630158] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-12-06 11:49:28.673914] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.042 [2024-12-06 11:49:28.673940] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:31.042 [2024-12-06 11:49:28.673963] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:31.042 true 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 [2024-12-06 11:49:28.690035] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71707 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71707 ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71707 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71707 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.042 killing process with pid 71707 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71707' 00:06:31.042 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71707 00:06:31.042 [2024-12-06 11:49:28.776457] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:31.043 [2024-12-06 11:49:28.776533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.043 [2024-12-06 11:49:28.776574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:31.043 [2024-12-06 11:49:28.776583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:31.043 11:49:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71707 00:06:31.043 [2024-12-06 11:49:28.778007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:31.302 11:49:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:31.302 
00:06:31.302 real 0m1.337s 00:06:31.302 user 0m1.476s 00:06:31.302 sys 0m0.323s 00:06:31.302 11:49:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.302 11:49:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 ************************************ 00:06:31.302 END TEST raid0_resize_test 00:06:31.302 ************************************ 00:06:31.562 11:49:29 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:31.562 11:49:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.562 11:49:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.562 11:49:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.562 ************************************ 00:06:31.562 START TEST raid1_resize_test 00:06:31.562 ************************************ 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71757 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71757' 00:06:31.562 Process raid pid: 71757 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71757 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71757 ']' 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.562 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.562 [2024-12-06 11:49:29.179442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:31.562 [2024-12-06 11:49:29.179571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.822 [2024-12-06 11:49:29.328323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.822 [2024-12-06 11:49:29.372284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.822 [2024-12-06 11:49:29.414038] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.822 [2024-12-06 11:49:29.414079] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 Base_1 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 Base_2 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-12-06 11:49:29.998690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:32.392 [2024-12-06 11:49:30.000474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:32.392 [2024-12-06 11:49:30.000534] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:32.392 [2024-12-06 11:49:30.000544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:32.392 [2024-12-06 11:49:30.000797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:32.392 [2024-12-06 11:49:30.000922] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:32.392 [2024-12-06 11:49:30.000942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:32.392 [2024-12-06 11:49:30.001056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-12-06 11:49:30.010641] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.392 [2024-12-06 11:49:30.010668] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:32.392 true 00:06:32.392 
11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-12-06 11:49:30.026781] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.392 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-12-06 11:49:30.070526] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.392 [2024-12-06 11:49:30.070552] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:32.392 [2024-12-06 11:49:30.070576] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:32.392 true 00:06:32.392 11:49:30 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:32.393 [2024-12-06 11:49:30.082667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71757 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71757 ']' 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 71757 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.393 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71757 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.653 killing process with pid 71757 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71757' 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 71757 00:06:32.653 [2024-12-06 11:49:30.169647] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.653 [2024-12-06 11:49:30.169733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 71757 00:06:32.653 [2024-12-06 11:49:30.170110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.653 [2024-12-06 11:49:30.170132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:32.653 [2024-12-06 11:49:30.171244] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:32.653 00:06:32.653 real 0m1.318s 00:06:32.653 user 0m1.458s 00:06:32.653 sys 0m0.297s 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.653 11:49:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.653 ************************************ 00:06:32.653 END TEST raid1_resize_test 00:06:32.653 ************************************ 00:06:32.913 11:49:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:32.913 11:49:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:32.913 11:49:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:32.913 11:49:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:32.913 11:49:30 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.913 11:49:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.913 ************************************ 00:06:32.913 START TEST raid_state_function_test 00:06:32.913 ************************************ 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71804 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.913 Process raid pid: 71804 00:06:32.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71804' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71804 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71804 ']' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.913 11:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.913 [2024-12-06 11:49:30.568981] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:32.913 [2024-12-06 11:49:30.569184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.173 [2024-12-06 11:49:30.715515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.173 [2024-12-06 11:49:30.759300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.173 [2024-12-06 11:49:30.800794] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.173 [2024-12-06 11:49:30.800916] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.774 [2024-12-06 11:49:31.385635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:33.774 [2024-12-06 11:49:31.385733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:33.774 [2024-12-06 11:49:31.385779] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.774 [2024-12-06 11:49:31.385803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.774 11:49:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.774 "name": "Existed_Raid", 00:06:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.774 "strip_size_kb": 64, 00:06:33.774 "state": "configuring", 00:06:33.774 
"raid_level": "raid0", 00:06:33.774 "superblock": false, 00:06:33.774 "num_base_bdevs": 2, 00:06:33.774 "num_base_bdevs_discovered": 0, 00:06:33.774 "num_base_bdevs_operational": 2, 00:06:33.774 "base_bdevs_list": [ 00:06:33.774 { 00:06:33.774 "name": "BaseBdev1", 00:06:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.774 "is_configured": false, 00:06:33.774 "data_offset": 0, 00:06:33.774 "data_size": 0 00:06:33.774 }, 00:06:33.774 { 00:06:33.774 "name": "BaseBdev2", 00:06:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.774 "is_configured": false, 00:06:33.774 "data_offset": 0, 00:06:33.774 "data_size": 0 00:06:33.774 } 00:06:33.774 ] 00:06:33.774 }' 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.774 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 [2024-12-06 11:49:31.801045] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.344 [2024-12-06 11:49:31.801091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:34.344 [2024-12-06 11:49:31.813028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:34.344 [2024-12-06 11:49:31.813070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:34.344 [2024-12-06 11:49:31.813103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.344 [2024-12-06 11:49:31.813113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 [2024-12-06 11:49:31.833711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.344 BaseBdev1 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.344 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 [ 00:06:34.344 { 00:06:34.345 "name": "BaseBdev1", 00:06:34.345 "aliases": [ 00:06:34.345 "236eee9f-b2de-4014-a69e-724205d46033" 00:06:34.345 ], 00:06:34.345 "product_name": "Malloc disk", 00:06:34.345 "block_size": 512, 00:06:34.345 "num_blocks": 65536, 00:06:34.345 "uuid": "236eee9f-b2de-4014-a69e-724205d46033", 00:06:34.345 "assigned_rate_limits": { 00:06:34.345 "rw_ios_per_sec": 0, 00:06:34.345 "rw_mbytes_per_sec": 0, 00:06:34.345 "r_mbytes_per_sec": 0, 00:06:34.345 "w_mbytes_per_sec": 0 00:06:34.345 }, 00:06:34.345 "claimed": true, 00:06:34.345 "claim_type": "exclusive_write", 00:06:34.345 "zoned": false, 00:06:34.345 "supported_io_types": { 00:06:34.345 "read": true, 00:06:34.345 "write": true, 00:06:34.345 "unmap": true, 00:06:34.345 "flush": true, 00:06:34.345 "reset": true, 00:06:34.345 "nvme_admin": false, 00:06:34.345 "nvme_io": false, 00:06:34.345 "nvme_io_md": false, 00:06:34.345 "write_zeroes": true, 00:06:34.345 "zcopy": true, 00:06:34.345 "get_zone_info": false, 00:06:34.345 "zone_management": false, 00:06:34.345 "zone_append": false, 00:06:34.345 "compare": false, 00:06:34.345 "compare_and_write": false, 00:06:34.345 "abort": true, 00:06:34.345 "seek_hole": false, 00:06:34.345 "seek_data": false, 00:06:34.345 "copy": true, 00:06:34.345 "nvme_iov_md": 
false 00:06:34.345 }, 00:06:34.345 "memory_domains": [ 00:06:34.345 { 00:06:34.345 "dma_device_id": "system", 00:06:34.345 "dma_device_type": 1 00:06:34.345 }, 00:06:34.345 { 00:06:34.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.345 "dma_device_type": 2 00:06:34.345 } 00:06:34.345 ], 00:06:34.345 "driver_specific": {} 00:06:34.345 } 00:06:34.345 ] 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.345 
11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.345 "name": "Existed_Raid", 00:06:34.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.345 "strip_size_kb": 64, 00:06:34.345 "state": "configuring", 00:06:34.345 "raid_level": "raid0", 00:06:34.345 "superblock": false, 00:06:34.345 "num_base_bdevs": 2, 00:06:34.345 "num_base_bdevs_discovered": 1, 00:06:34.345 "num_base_bdevs_operational": 2, 00:06:34.345 "base_bdevs_list": [ 00:06:34.345 { 00:06:34.345 "name": "BaseBdev1", 00:06:34.345 "uuid": "236eee9f-b2de-4014-a69e-724205d46033", 00:06:34.345 "is_configured": true, 00:06:34.345 "data_offset": 0, 00:06:34.345 "data_size": 65536 00:06:34.345 }, 00:06:34.345 { 00:06:34.345 "name": "BaseBdev2", 00:06:34.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.345 "is_configured": false, 00:06:34.345 "data_offset": 0, 00:06:34.345 "data_size": 0 00:06:34.345 } 00:06:34.345 ] 00:06:34.345 }' 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.345 11:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.605 [2024-12-06 11:49:32.340875] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.605 [2024-12-06 11:49:32.340916] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.605 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.605 [2024-12-06 11:49:32.348910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.605 [2024-12-06 11:49:32.350745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.606 [2024-12-06 11:49:32.350815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.606 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.866 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.866 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.866 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.866 "name": "Existed_Raid", 00:06:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.866 "strip_size_kb": 64, 00:06:34.866 "state": "configuring", 00:06:34.866 "raid_level": "raid0", 00:06:34.866 "superblock": false, 00:06:34.866 "num_base_bdevs": 2, 00:06:34.866 "num_base_bdevs_discovered": 1, 00:06:34.866 "num_base_bdevs_operational": 2, 00:06:34.866 "base_bdevs_list": [ 00:06:34.866 { 00:06:34.866 "name": "BaseBdev1", 00:06:34.866 "uuid": "236eee9f-b2de-4014-a69e-724205d46033", 00:06:34.866 "is_configured": true, 00:06:34.866 "data_offset": 0, 00:06:34.866 "data_size": 65536 00:06:34.866 }, 00:06:34.866 { 00:06:34.866 "name": "BaseBdev2", 00:06:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.866 "is_configured": false, 00:06:34.866 "data_offset": 0, 00:06:34.866 "data_size": 0 00:06:34.866 } 00:06:34.866 
] 00:06:34.866 }' 00:06:34.866 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.866 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 [2024-12-06 11:49:32.814887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:35.127 [2024-12-06 11:49:32.815006] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:35.127 [2024-12-06 11:49:32.815048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:35.127 [2024-12-06 11:49:32.815395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:35.127 [2024-12-06 11:49:32.815609] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:35.127 [2024-12-06 11:49:32.815664] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:35.127 [2024-12-06 11:49:32.815941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.127 BaseBdev2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:35.127 11:49:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 [ 00:06:35.127 { 00:06:35.127 "name": "BaseBdev2", 00:06:35.127 "aliases": [ 00:06:35.127 "7850d639-13be-45ea-a125-8ac09f027fc8" 00:06:35.127 ], 00:06:35.127 "product_name": "Malloc disk", 00:06:35.127 "block_size": 512, 00:06:35.127 "num_blocks": 65536, 00:06:35.127 "uuid": "7850d639-13be-45ea-a125-8ac09f027fc8", 00:06:35.127 "assigned_rate_limits": { 00:06:35.127 "rw_ios_per_sec": 0, 00:06:35.127 "rw_mbytes_per_sec": 0, 00:06:35.127 "r_mbytes_per_sec": 0, 00:06:35.127 "w_mbytes_per_sec": 0 00:06:35.127 }, 00:06:35.127 "claimed": true, 00:06:35.127 "claim_type": "exclusive_write", 00:06:35.127 "zoned": false, 00:06:35.127 "supported_io_types": { 00:06:35.127 "read": true, 00:06:35.127 "write": true, 00:06:35.127 "unmap": true, 00:06:35.127 "flush": true, 00:06:35.127 "reset": true, 00:06:35.127 "nvme_admin": false, 00:06:35.127 "nvme_io": false, 00:06:35.127 "nvme_io_md": 
false, 00:06:35.127 "write_zeroes": true, 00:06:35.127 "zcopy": true, 00:06:35.127 "get_zone_info": false, 00:06:35.127 "zone_management": false, 00:06:35.127 "zone_append": false, 00:06:35.127 "compare": false, 00:06:35.127 "compare_and_write": false, 00:06:35.127 "abort": true, 00:06:35.127 "seek_hole": false, 00:06:35.127 "seek_data": false, 00:06:35.127 "copy": true, 00:06:35.127 "nvme_iov_md": false 00:06:35.127 }, 00:06:35.127 "memory_domains": [ 00:06:35.127 { 00:06:35.127 "dma_device_id": "system", 00:06:35.127 "dma_device_type": 1 00:06:35.127 }, 00:06:35.127 { 00:06:35.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.127 "dma_device_type": 2 00:06:35.127 } 00:06:35.127 ], 00:06:35.127 "driver_specific": {} 00:06:35.127 } 00:06:35.127 ] 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.387 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.387 "name": "Existed_Raid", 00:06:35.387 "uuid": "c62af6cc-efd3-4133-93e0-2ea8f30e3ba0", 00:06:35.387 "strip_size_kb": 64, 00:06:35.387 "state": "online", 00:06:35.387 "raid_level": "raid0", 00:06:35.387 "superblock": false, 00:06:35.387 "num_base_bdevs": 2, 00:06:35.387 "num_base_bdevs_discovered": 2, 00:06:35.387 "num_base_bdevs_operational": 2, 00:06:35.387 "base_bdevs_list": [ 00:06:35.387 { 00:06:35.387 "name": "BaseBdev1", 00:06:35.387 "uuid": "236eee9f-b2de-4014-a69e-724205d46033", 00:06:35.387 "is_configured": true, 00:06:35.387 "data_offset": 0, 00:06:35.387 "data_size": 65536 00:06:35.387 }, 00:06:35.387 { 00:06:35.387 "name": "BaseBdev2", 00:06:35.387 "uuid": "7850d639-13be-45ea-a125-8ac09f027fc8", 00:06:35.387 "is_configured": true, 00:06:35.387 "data_offset": 0, 00:06:35.387 "data_size": 65536 00:06:35.387 } 00:06:35.387 ] 00:06:35.387 }' 00:06:35.387 11:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:35.387 11:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.648 [2024-12-06 11:49:33.266403] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:35.648 "name": "Existed_Raid", 00:06:35.648 "aliases": [ 00:06:35.648 "c62af6cc-efd3-4133-93e0-2ea8f30e3ba0" 00:06:35.648 ], 00:06:35.648 "product_name": "Raid Volume", 00:06:35.648 "block_size": 512, 00:06:35.648 "num_blocks": 131072, 00:06:35.648 "uuid": "c62af6cc-efd3-4133-93e0-2ea8f30e3ba0", 00:06:35.648 "assigned_rate_limits": { 00:06:35.648 "rw_ios_per_sec": 0, 00:06:35.648 "rw_mbytes_per_sec": 0, 00:06:35.648 "r_mbytes_per_sec": 
0, 00:06:35.648 "w_mbytes_per_sec": 0 00:06:35.648 }, 00:06:35.648 "claimed": false, 00:06:35.648 "zoned": false, 00:06:35.648 "supported_io_types": { 00:06:35.648 "read": true, 00:06:35.648 "write": true, 00:06:35.648 "unmap": true, 00:06:35.648 "flush": true, 00:06:35.648 "reset": true, 00:06:35.648 "nvme_admin": false, 00:06:35.648 "nvme_io": false, 00:06:35.648 "nvme_io_md": false, 00:06:35.648 "write_zeroes": true, 00:06:35.648 "zcopy": false, 00:06:35.648 "get_zone_info": false, 00:06:35.648 "zone_management": false, 00:06:35.648 "zone_append": false, 00:06:35.648 "compare": false, 00:06:35.648 "compare_and_write": false, 00:06:35.648 "abort": false, 00:06:35.648 "seek_hole": false, 00:06:35.648 "seek_data": false, 00:06:35.648 "copy": false, 00:06:35.648 "nvme_iov_md": false 00:06:35.648 }, 00:06:35.648 "memory_domains": [ 00:06:35.648 { 00:06:35.648 "dma_device_id": "system", 00:06:35.648 "dma_device_type": 1 00:06:35.648 }, 00:06:35.648 { 00:06:35.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.648 "dma_device_type": 2 00:06:35.648 }, 00:06:35.648 { 00:06:35.648 "dma_device_id": "system", 00:06:35.648 "dma_device_type": 1 00:06:35.648 }, 00:06:35.648 { 00:06:35.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.648 "dma_device_type": 2 00:06:35.648 } 00:06:35.648 ], 00:06:35.648 "driver_specific": { 00:06:35.648 "raid": { 00:06:35.648 "uuid": "c62af6cc-efd3-4133-93e0-2ea8f30e3ba0", 00:06:35.648 "strip_size_kb": 64, 00:06:35.648 "state": "online", 00:06:35.648 "raid_level": "raid0", 00:06:35.648 "superblock": false, 00:06:35.648 "num_base_bdevs": 2, 00:06:35.648 "num_base_bdevs_discovered": 2, 00:06:35.648 "num_base_bdevs_operational": 2, 00:06:35.648 "base_bdevs_list": [ 00:06:35.648 { 00:06:35.648 "name": "BaseBdev1", 00:06:35.648 "uuid": "236eee9f-b2de-4014-a69e-724205d46033", 00:06:35.648 "is_configured": true, 00:06:35.648 "data_offset": 0, 00:06:35.648 "data_size": 65536 00:06:35.648 }, 00:06:35.648 { 00:06:35.648 "name": "BaseBdev2", 
00:06:35.648 "uuid": "7850d639-13be-45ea-a125-8ac09f027fc8", 00:06:35.648 "is_configured": true, 00:06:35.648 "data_offset": 0, 00:06:35.648 "data_size": 65536 00:06:35.648 } 00:06:35.648 ] 00:06:35.648 } 00:06:35.648 } 00:06:35.648 }' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:35.648 BaseBdev2' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.648 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 [2024-12-06 11:49:33.497804] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:35.908 [2024-12-06 11:49:33.497832] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:35.908 [2024-12-06 11:49:33.497884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.908 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.908 "name": "Existed_Raid", 00:06:35.908 "uuid": "c62af6cc-efd3-4133-93e0-2ea8f30e3ba0", 00:06:35.908 "strip_size_kb": 64, 00:06:35.908 
"state": "offline", 00:06:35.908 "raid_level": "raid0", 00:06:35.908 "superblock": false, 00:06:35.908 "num_base_bdevs": 2, 00:06:35.909 "num_base_bdevs_discovered": 1, 00:06:35.909 "num_base_bdevs_operational": 1, 00:06:35.909 "base_bdevs_list": [ 00:06:35.909 { 00:06:35.909 "name": null, 00:06:35.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.909 "is_configured": false, 00:06:35.909 "data_offset": 0, 00:06:35.909 "data_size": 65536 00:06:35.909 }, 00:06:35.909 { 00:06:35.909 "name": "BaseBdev2", 00:06:35.909 "uuid": "7850d639-13be-45ea-a125-8ac09f027fc8", 00:06:35.909 "is_configured": true, 00:06:35.909 "data_offset": 0, 00:06:35.909 "data_size": 65536 00:06:35.909 } 00:06:35.909 ] 00:06:35.909 }' 00:06:35.909 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.909 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.478 11:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.478 [2024-12-06 11:49:34.004433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:36.478 [2024-12-06 11:49:34.004485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:36.478 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.478 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:36.478 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.478 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71804 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71804 ']' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 71804 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71804 00:06:36.479 killing process with pid 71804 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71804' 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71804 00:06:36.479 [2024-12-06 11:49:34.116281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:36.479 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71804 00:06:36.479 [2024-12-06 11:49:34.117244] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:36.748 ************************************ 00:06:36.748 END TEST raid_state_function_test 00:06:36.748 ************************************ 00:06:36.748 00:06:36.748 real 0m3.880s 00:06:36.748 user 0m6.147s 00:06:36.748 sys 0m0.733s 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.748 11:49:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:36.748 11:49:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:06:36.748 11:49:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.748 11:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.748 ************************************ 00:06:36.748 START TEST raid_state_function_test_sb 00:06:36.748 ************************************ 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:36.748 Process raid pid: 72046 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72046 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72046' 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72046 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72046 ']' 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.748 11:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.023 [2024-12-06 11:49:34.513941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:37.023 [2024-12-06 11:49:34.514154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.023 [2024-12-06 11:49:34.659060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.023 [2024-12-06 11:49:34.703414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.023 [2024-12-06 11:49:34.746628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.023 [2024-12-06 11:49:34.746743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.591 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.591 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:37.591 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.591 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.591 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.591 [2024-12-06 11:49:35.340327] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:37.591 [2024-12-06 11:49:35.340374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.591 [2024-12-06 11:49:35.340387] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.591 [2024-12-06 11:49:35.340397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.850 "name": "Existed_Raid", 00:06:37.850 "uuid": "4548203f-5dc3-4c8b-9046-cd55915fa2da", 00:06:37.850 "strip_size_kb": 64, 00:06:37.850 "state": "configuring", 00:06:37.850 "raid_level": "raid0", 00:06:37.850 "superblock": true, 00:06:37.850 "num_base_bdevs": 2, 00:06:37.850 "num_base_bdevs_discovered": 0, 00:06:37.850 "num_base_bdevs_operational": 2, 00:06:37.850 "base_bdevs_list": [ 00:06:37.850 { 00:06:37.850 "name": "BaseBdev1", 00:06:37.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.850 "is_configured": false, 00:06:37.850 "data_offset": 0, 00:06:37.850 "data_size": 0 00:06:37.850 }, 00:06:37.850 { 00:06:37.850 "name": "BaseBdev2", 00:06:37.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.850 "is_configured": false, 00:06:37.850 "data_offset": 0, 00:06:37.850 "data_size": 0 00:06:37.850 } 00:06:37.850 ] 00:06:37.850 }' 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.850 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.109 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:38.109 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 [2024-12-06 11:49:35.731427] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:38.110 [2024-12-06 11:49:35.731513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 [2024-12-06 11:49:35.743431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:38.110 [2024-12-06 11:49:35.743508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:38.110 [2024-12-06 11:49:35.743544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:38.110 [2024-12-06 11:49:35.743567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 [2024-12-06 11:49:35.764065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:38.110 BaseBdev1 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 [ 00:06:38.110 { 00:06:38.110 "name": "BaseBdev1", 00:06:38.110 "aliases": [ 00:06:38.110 "db646954-aeb6-4b36-8355-02ac5a18717f" 00:06:38.110 ], 00:06:38.110 "product_name": "Malloc disk", 00:06:38.110 "block_size": 512, 00:06:38.110 "num_blocks": 65536, 00:06:38.110 "uuid": "db646954-aeb6-4b36-8355-02ac5a18717f", 00:06:38.110 "assigned_rate_limits": { 00:06:38.110 "rw_ios_per_sec": 0, 00:06:38.110 "rw_mbytes_per_sec": 0, 00:06:38.110 "r_mbytes_per_sec": 0, 00:06:38.110 "w_mbytes_per_sec": 0 00:06:38.110 }, 00:06:38.110 "claimed": true, 
00:06:38.110 "claim_type": "exclusive_write", 00:06:38.110 "zoned": false, 00:06:38.110 "supported_io_types": { 00:06:38.110 "read": true, 00:06:38.110 "write": true, 00:06:38.110 "unmap": true, 00:06:38.110 "flush": true, 00:06:38.110 "reset": true, 00:06:38.110 "nvme_admin": false, 00:06:38.110 "nvme_io": false, 00:06:38.110 "nvme_io_md": false, 00:06:38.110 "write_zeroes": true, 00:06:38.110 "zcopy": true, 00:06:38.110 "get_zone_info": false, 00:06:38.110 "zone_management": false, 00:06:38.110 "zone_append": false, 00:06:38.110 "compare": false, 00:06:38.110 "compare_and_write": false, 00:06:38.110 "abort": true, 00:06:38.110 "seek_hole": false, 00:06:38.110 "seek_data": false, 00:06:38.110 "copy": true, 00:06:38.110 "nvme_iov_md": false 00:06:38.110 }, 00:06:38.110 "memory_domains": [ 00:06:38.110 { 00:06:38.110 "dma_device_id": "system", 00:06:38.110 "dma_device_type": 1 00:06:38.110 }, 00:06:38.110 { 00:06:38.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.110 "dma_device_type": 2 00:06:38.110 } 00:06:38.110 ], 00:06:38.110 "driver_specific": {} 00:06:38.110 } 00:06:38.110 ] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.110 11:49:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.110 "name": "Existed_Raid", 00:06:38.110 "uuid": "2d28f171-a3ba-48eb-92f9-4900724af309", 00:06:38.110 "strip_size_kb": 64, 00:06:38.110 "state": "configuring", 00:06:38.110 "raid_level": "raid0", 00:06:38.110 "superblock": true, 00:06:38.110 "num_base_bdevs": 2, 00:06:38.110 "num_base_bdevs_discovered": 1, 00:06:38.110 "num_base_bdevs_operational": 2, 00:06:38.110 "base_bdevs_list": [ 00:06:38.110 { 00:06:38.110 "name": "BaseBdev1", 00:06:38.110 "uuid": "db646954-aeb6-4b36-8355-02ac5a18717f", 00:06:38.110 "is_configured": true, 00:06:38.110 "data_offset": 2048, 00:06:38.110 "data_size": 63488 00:06:38.110 }, 00:06:38.110 { 00:06:38.110 "name": "BaseBdev2", 00:06:38.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.110 
"is_configured": false, 00:06:38.110 "data_offset": 0, 00:06:38.110 "data_size": 0 00:06:38.110 } 00:06:38.110 ] 00:06:38.110 }' 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.110 11:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 [2024-12-06 11:49:36.219329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:38.680 [2024-12-06 11:49:36.219369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 [2024-12-06 11:49:36.231386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:38.680 [2024-12-06 11:49:36.233135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:38.680 [2024-12-06 11:49:36.233176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.680 11:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.680 11:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.680 "name": "Existed_Raid", 00:06:38.680 "uuid": "a4ec16ad-7eab-41d7-9dea-d51ce573bd78", 00:06:38.680 "strip_size_kb": 64, 00:06:38.680 "state": "configuring", 00:06:38.680 "raid_level": "raid0", 00:06:38.680 "superblock": true, 00:06:38.680 "num_base_bdevs": 2, 00:06:38.680 "num_base_bdevs_discovered": 1, 00:06:38.680 "num_base_bdevs_operational": 2, 00:06:38.680 "base_bdevs_list": [ 00:06:38.680 { 00:06:38.680 "name": "BaseBdev1", 00:06:38.680 "uuid": "db646954-aeb6-4b36-8355-02ac5a18717f", 00:06:38.680 "is_configured": true, 00:06:38.680 "data_offset": 2048, 00:06:38.680 "data_size": 63488 00:06:38.680 }, 00:06:38.680 { 00:06:38.680 "name": "BaseBdev2", 00:06:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.680 "is_configured": false, 00:06:38.680 "data_offset": 0, 00:06:38.680 "data_size": 0 00:06:38.680 } 00:06:38.680 ] 00:06:38.680 }' 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.680 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.940 [2024-12-06 11:49:36.685840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:38.940 [2024-12-06 11:49:36.686178] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:38.940 [2024-12-06 11:49:36.686283] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:38.940 [2024-12-06 11:49:36.686708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002390 00:06:38.940 BaseBdev2 00:06:38.940 [2024-12-06 11:49:36.686965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:38.940 [2024-12-06 11:49:36.687037] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:38.940 [2024-12-06 11:49:36.687279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.940 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.200 11:49:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.200 [ 00:06:39.200 { 00:06:39.200 "name": "BaseBdev2", 00:06:39.200 "aliases": [ 00:06:39.200 "05112bcd-7f44-4326-85c1-f067fc773823" 00:06:39.200 ], 00:06:39.200 "product_name": "Malloc disk", 00:06:39.200 "block_size": 512, 00:06:39.200 "num_blocks": 65536, 00:06:39.200 "uuid": "05112bcd-7f44-4326-85c1-f067fc773823", 00:06:39.200 "assigned_rate_limits": { 00:06:39.200 "rw_ios_per_sec": 0, 00:06:39.200 "rw_mbytes_per_sec": 0, 00:06:39.200 "r_mbytes_per_sec": 0, 00:06:39.200 "w_mbytes_per_sec": 0 00:06:39.200 }, 00:06:39.200 "claimed": true, 00:06:39.200 "claim_type": "exclusive_write", 00:06:39.200 "zoned": false, 00:06:39.200 "supported_io_types": { 00:06:39.200 "read": true, 00:06:39.200 "write": true, 00:06:39.200 "unmap": true, 00:06:39.200 "flush": true, 00:06:39.200 "reset": true, 00:06:39.200 "nvme_admin": false, 00:06:39.200 "nvme_io": false, 00:06:39.200 "nvme_io_md": false, 00:06:39.200 "write_zeroes": true, 00:06:39.200 "zcopy": true, 00:06:39.200 "get_zone_info": false, 00:06:39.200 "zone_management": false, 00:06:39.200 "zone_append": false, 00:06:39.200 "compare": false, 00:06:39.200 "compare_and_write": false, 00:06:39.200 "abort": true, 00:06:39.200 "seek_hole": false, 00:06:39.200 "seek_data": false, 00:06:39.200 "copy": true, 00:06:39.200 "nvme_iov_md": false 00:06:39.200 }, 00:06:39.200 "memory_domains": [ 00:06:39.200 { 00:06:39.200 "dma_device_id": "system", 00:06:39.200 "dma_device_type": 1 00:06:39.200 }, 00:06:39.200 { 00:06:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.200 "dma_device_type": 2 00:06:39.200 } 00:06:39.200 ], 00:06:39.200 "driver_specific": {} 00:06:39.200 } 00:06:39.200 ] 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:39.200 11:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.200 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.200 11:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.200 "name": "Existed_Raid", 00:06:39.200 "uuid": "a4ec16ad-7eab-41d7-9dea-d51ce573bd78", 00:06:39.200 "strip_size_kb": 64, 00:06:39.200 "state": "online", 00:06:39.200 "raid_level": "raid0", 00:06:39.200 "superblock": true, 00:06:39.200 "num_base_bdevs": 2, 00:06:39.200 "num_base_bdevs_discovered": 2, 00:06:39.200 "num_base_bdevs_operational": 2, 00:06:39.200 "base_bdevs_list": [ 00:06:39.200 { 00:06:39.200 "name": "BaseBdev1", 00:06:39.200 "uuid": "db646954-aeb6-4b36-8355-02ac5a18717f", 00:06:39.201 "is_configured": true, 00:06:39.201 "data_offset": 2048, 00:06:39.201 "data_size": 63488 00:06:39.201 }, 00:06:39.201 { 00:06:39.201 "name": "BaseBdev2", 00:06:39.201 "uuid": "05112bcd-7f44-4326-85c1-f067fc773823", 00:06:39.201 "is_configured": true, 00:06:39.201 "data_offset": 2048, 00:06:39.201 "data_size": 63488 00:06:39.201 } 00:06:39.201 ] 00:06:39.201 }' 00:06:39.201 11:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.201 11:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:39.461 [2024-12-06 11:49:37.185295] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.461 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:39.722 "name": "Existed_Raid", 00:06:39.722 "aliases": [ 00:06:39.722 "a4ec16ad-7eab-41d7-9dea-d51ce573bd78" 00:06:39.722 ], 00:06:39.722 "product_name": "Raid Volume", 00:06:39.722 "block_size": 512, 00:06:39.722 "num_blocks": 126976, 00:06:39.722 "uuid": "a4ec16ad-7eab-41d7-9dea-d51ce573bd78", 00:06:39.722 "assigned_rate_limits": { 00:06:39.722 "rw_ios_per_sec": 0, 00:06:39.722 "rw_mbytes_per_sec": 0, 00:06:39.722 "r_mbytes_per_sec": 0, 00:06:39.722 "w_mbytes_per_sec": 0 00:06:39.722 }, 00:06:39.722 "claimed": false, 00:06:39.722 "zoned": false, 00:06:39.722 "supported_io_types": { 00:06:39.722 "read": true, 00:06:39.722 "write": true, 00:06:39.722 "unmap": true, 00:06:39.722 "flush": true, 00:06:39.722 "reset": true, 00:06:39.722 "nvme_admin": false, 00:06:39.722 "nvme_io": false, 00:06:39.722 "nvme_io_md": false, 00:06:39.722 "write_zeroes": true, 00:06:39.722 "zcopy": false, 00:06:39.722 "get_zone_info": false, 00:06:39.722 "zone_management": false, 00:06:39.722 "zone_append": false, 00:06:39.722 "compare": false, 00:06:39.722 "compare_and_write": false, 00:06:39.722 "abort": false, 00:06:39.722 "seek_hole": false, 00:06:39.722 "seek_data": false, 00:06:39.722 "copy": false, 00:06:39.722 "nvme_iov_md": false 00:06:39.722 }, 00:06:39.722 "memory_domains": [ 00:06:39.722 { 00:06:39.722 
"dma_device_id": "system", 00:06:39.722 "dma_device_type": 1 00:06:39.722 }, 00:06:39.722 { 00:06:39.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.722 "dma_device_type": 2 00:06:39.722 }, 00:06:39.722 { 00:06:39.722 "dma_device_id": "system", 00:06:39.722 "dma_device_type": 1 00:06:39.722 }, 00:06:39.722 { 00:06:39.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.722 "dma_device_type": 2 00:06:39.722 } 00:06:39.722 ], 00:06:39.722 "driver_specific": { 00:06:39.722 "raid": { 00:06:39.722 "uuid": "a4ec16ad-7eab-41d7-9dea-d51ce573bd78", 00:06:39.722 "strip_size_kb": 64, 00:06:39.722 "state": "online", 00:06:39.722 "raid_level": "raid0", 00:06:39.722 "superblock": true, 00:06:39.722 "num_base_bdevs": 2, 00:06:39.722 "num_base_bdevs_discovered": 2, 00:06:39.722 "num_base_bdevs_operational": 2, 00:06:39.722 "base_bdevs_list": [ 00:06:39.722 { 00:06:39.722 "name": "BaseBdev1", 00:06:39.722 "uuid": "db646954-aeb6-4b36-8355-02ac5a18717f", 00:06:39.722 "is_configured": true, 00:06:39.722 "data_offset": 2048, 00:06:39.722 "data_size": 63488 00:06:39.722 }, 00:06:39.722 { 00:06:39.722 "name": "BaseBdev2", 00:06:39.722 "uuid": "05112bcd-7f44-4326-85c1-f067fc773823", 00:06:39.722 "is_configured": true, 00:06:39.722 "data_offset": 2048, 00:06:39.722 "data_size": 63488 00:06:39.722 } 00:06:39.722 ] 00:06:39.722 } 00:06:39.722 } 00:06:39.722 }' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:39.722 BaseBdev2' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:39.722 11:49:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 [2024-12-06 11:49:37.428672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:39.722 [2024-12-06 11:49:37.428743] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:39.722 [2024-12-06 11:49:37.428797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.981 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.981 "name": "Existed_Raid", 00:06:39.981 "uuid": "a4ec16ad-7eab-41d7-9dea-d51ce573bd78", 00:06:39.981 "strip_size_kb": 64, 00:06:39.981 "state": "offline", 00:06:39.982 "raid_level": "raid0", 00:06:39.982 "superblock": true, 00:06:39.982 "num_base_bdevs": 2, 00:06:39.982 "num_base_bdevs_discovered": 1, 00:06:39.982 "num_base_bdevs_operational": 1, 00:06:39.982 "base_bdevs_list": [ 00:06:39.982 { 00:06:39.982 "name": null, 00:06:39.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.982 "is_configured": false, 00:06:39.982 "data_offset": 0, 00:06:39.982 "data_size": 63488 00:06:39.982 }, 00:06:39.982 { 00:06:39.982 "name": "BaseBdev2", 00:06:39.982 "uuid": "05112bcd-7f44-4326-85c1-f067fc773823", 00:06:39.982 "is_configured": true, 00:06:39.982 "data_offset": 2048, 00:06:39.982 "data_size": 63488 00:06:39.982 } 00:06:39.982 ] 
00:06:39.982 }' 00:06:39.982 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.982 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.242 [2024-12-06 11:49:37.931308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:40.242 [2024-12-06 11:49:37.931357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.242 11:49:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72046 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72046 ']' 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72046 00:06:40.242 11:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72046 00:06:40.502 killing process with pid 72046 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72046' 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72046 00:06:40.502 [2024-12-06 11:49:38.038905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.502 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72046 00:06:40.502 [2024-12-06 11:49:38.039916] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.762 ************************************ 00:06:40.762 END TEST raid_state_function_test_sb 00:06:40.762 ************************************ 00:06:40.762 11:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:40.762 00:06:40.762 real 0m3.849s 00:06:40.762 user 0m6.071s 00:06:40.762 sys 0m0.714s 00:06:40.762 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.762 11:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.762 11:49:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:40.762 11:49:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:40.762 11:49:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.762 11:49:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.762 ************************************ 00:06:40.762 START TEST raid_superblock_test 00:06:40.762 ************************************ 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72287 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72287 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72287 ']' 00:06:40.762 
11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.762 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.762 [2024-12-06 11:49:38.427765] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:40.762 [2024-12-06 11:49:38.427898] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72287 ] 00:06:41.022 [2024-12-06 11:49:38.572425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.022 [2024-12-06 11:49:38.615930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.022 [2024-12-06 11:49:38.657738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.022 [2024-12-06 11:49:38.657772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 malloc1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 [2024-12-06 11:49:39.278967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:41.597 [2024-12-06 11:49:39.279086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.597 [2024-12-06 11:49:39.279136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:41.597 [2024-12-06 11:49:39.279170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:41.597 [2024-12-06 11:49:39.281245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.597 [2024-12-06 11:49:39.281342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:41.597 pt1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 malloc2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 [2024-12-06 11:49:39.327134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:41.597 [2024-12-06 11:49:39.327270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.597 [2024-12-06 11:49:39.327310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:41.597 [2024-12-06 11:49:39.327335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.597 [2024-12-06 11:49:39.332015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.597 [2024-12-06 11:49:39.332097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:41.597 pt2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 [2024-12-06 11:49:39.340395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:41.597 [2024-12-06 11:49:39.343261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:41.597 [2024-12-06 11:49:39.343460] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:41.597 [2024-12-06 11:49:39.343484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:06:41.597 [2024-12-06 11:49:39.343871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:41.597 [2024-12-06 11:49:39.344068] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:41.597 [2024-12-06 11:49:39.344084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:41.597 [2024-12-06 11:49:39.344435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.597 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.598 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.856 11:49:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.856 "name": "raid_bdev1", 00:06:41.856 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:41.856 "strip_size_kb": 64, 00:06:41.856 "state": "online", 00:06:41.856 "raid_level": "raid0", 00:06:41.856 "superblock": true, 00:06:41.856 "num_base_bdevs": 2, 00:06:41.856 "num_base_bdevs_discovered": 2, 00:06:41.856 "num_base_bdevs_operational": 2, 00:06:41.856 "base_bdevs_list": [ 00:06:41.856 { 00:06:41.856 "name": "pt1", 00:06:41.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:41.856 "is_configured": true, 00:06:41.856 "data_offset": 2048, 00:06:41.856 "data_size": 63488 00:06:41.856 }, 00:06:41.856 { 00:06:41.856 "name": "pt2", 00:06:41.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.856 "is_configured": true, 00:06:41.856 "data_offset": 2048, 00:06:41.856 "data_size": 63488 00:06:41.856 } 00:06:41.856 ] 00:06:41.856 }' 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.856 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.115 [2024-12-06 11:49:39.763876] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.115 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:42.115 "name": "raid_bdev1", 00:06:42.115 "aliases": [ 00:06:42.115 "7cecc0cd-6802-44ba-a488-6e255481d498" 00:06:42.115 ], 00:06:42.115 "product_name": "Raid Volume", 00:06:42.115 "block_size": 512, 00:06:42.115 "num_blocks": 126976, 00:06:42.115 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:42.116 "assigned_rate_limits": { 00:06:42.116 "rw_ios_per_sec": 0, 00:06:42.116 "rw_mbytes_per_sec": 0, 00:06:42.116 "r_mbytes_per_sec": 0, 00:06:42.116 "w_mbytes_per_sec": 0 00:06:42.116 }, 00:06:42.116 "claimed": false, 00:06:42.116 "zoned": false, 00:06:42.116 "supported_io_types": { 00:06:42.116 "read": true, 00:06:42.116 "write": true, 00:06:42.116 "unmap": true, 00:06:42.116 "flush": true, 00:06:42.116 "reset": true, 00:06:42.116 "nvme_admin": false, 00:06:42.116 "nvme_io": false, 00:06:42.116 "nvme_io_md": false, 00:06:42.116 "write_zeroes": true, 00:06:42.116 "zcopy": false, 00:06:42.116 "get_zone_info": false, 00:06:42.116 "zone_management": false, 00:06:42.116 "zone_append": false, 00:06:42.116 "compare": false, 00:06:42.116 "compare_and_write": false, 00:06:42.116 "abort": false, 00:06:42.116 
"seek_hole": false, 00:06:42.116 "seek_data": false, 00:06:42.116 "copy": false, 00:06:42.116 "nvme_iov_md": false 00:06:42.116 }, 00:06:42.116 "memory_domains": [ 00:06:42.116 { 00:06:42.116 "dma_device_id": "system", 00:06:42.116 "dma_device_type": 1 00:06:42.116 }, 00:06:42.116 { 00:06:42.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.116 "dma_device_type": 2 00:06:42.116 }, 00:06:42.116 { 00:06:42.116 "dma_device_id": "system", 00:06:42.116 "dma_device_type": 1 00:06:42.116 }, 00:06:42.116 { 00:06:42.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.116 "dma_device_type": 2 00:06:42.116 } 00:06:42.116 ], 00:06:42.116 "driver_specific": { 00:06:42.116 "raid": { 00:06:42.116 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:42.116 "strip_size_kb": 64, 00:06:42.116 "state": "online", 00:06:42.116 "raid_level": "raid0", 00:06:42.116 "superblock": true, 00:06:42.116 "num_base_bdevs": 2, 00:06:42.116 "num_base_bdevs_discovered": 2, 00:06:42.116 "num_base_bdevs_operational": 2, 00:06:42.116 "base_bdevs_list": [ 00:06:42.116 { 00:06:42.116 "name": "pt1", 00:06:42.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:42.116 "is_configured": true, 00:06:42.116 "data_offset": 2048, 00:06:42.116 "data_size": 63488 00:06:42.116 }, 00:06:42.116 { 00:06:42.116 "name": "pt2", 00:06:42.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:42.116 "is_configured": true, 00:06:42.116 "data_offset": 2048, 00:06:42.116 "data_size": 63488 00:06:42.116 } 00:06:42.116 ] 00:06:42.116 } 00:06:42.116 } 00:06:42.116 }' 00:06:42.116 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:42.116 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:42.116 pt2' 00:06:42.116 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 [2024-12-06 11:49:40.003415] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7cecc0cd-6802-44ba-a488-6e255481d498 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7cecc0cd-6802-44ba-a488-6e255481d498 ']' 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 [2024-12-06 11:49:40.051098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:42.376 [2024-12-06 11:49:40.051175] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:42.376 [2024-12-06 11:49:40.051298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.376 [2024-12-06 11:49:40.051379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.376 [2024-12-06 11:49:40.051425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:42.376 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:42.377 11:49:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.377 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.637 [2024-12-06 11:49:40.178912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:42.637 [2024-12-06 11:49:40.180812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:42.637 [2024-12-06 11:49:40.180878] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:42.637 [2024-12-06 11:49:40.180933] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:42.637 [2024-12-06 11:49:40.180951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:42.637 [2024-12-06 11:49:40.180960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:42.637 request: 00:06:42.637 { 00:06:42.637 "name": "raid_bdev1", 00:06:42.637 "raid_level": "raid0", 00:06:42.637 "base_bdevs": [ 00:06:42.637 "malloc1", 00:06:42.637 "malloc2" 00:06:42.637 ], 00:06:42.637 "strip_size_kb": 64, 00:06:42.637 "superblock": false, 00:06:42.637 "method": "bdev_raid_create", 00:06:42.637 "req_id": 1 00:06:42.637 } 00:06:42.637 Got JSON-RPC error response 00:06:42.637 response: 00:06:42.637 { 00:06:42.637 "code": -17, 00:06:42.637 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:42.637 } 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.637 [2024-12-06 11:49:40.242754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:42.637 [2024-12-06 11:49:40.242811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.637 [2024-12-06 11:49:40.242834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:42.637 [2024-12-06 11:49:40.242842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.637 [2024-12-06 11:49:40.245022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.637 [2024-12-06 11:49:40.245059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:42.637 [2024-12-06 11:49:40.245131] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:42.637 [2024-12-06 11:49:40.245192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:42.637 pt1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.637 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.638 "name": "raid_bdev1", 00:06:42.638 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:42.638 "strip_size_kb": 64, 00:06:42.638 "state": "configuring", 00:06:42.638 "raid_level": "raid0", 00:06:42.638 "superblock": true, 00:06:42.638 "num_base_bdevs": 2, 00:06:42.638 "num_base_bdevs_discovered": 1, 00:06:42.638 "num_base_bdevs_operational": 2, 00:06:42.638 "base_bdevs_list": [ 00:06:42.638 { 00:06:42.638 "name": 
"pt1", 00:06:42.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:42.638 "is_configured": true, 00:06:42.638 "data_offset": 2048, 00:06:42.638 "data_size": 63488 00:06:42.638 }, 00:06:42.638 { 00:06:42.638 "name": null, 00:06:42.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:42.638 "is_configured": false, 00:06:42.638 "data_offset": 2048, 00:06:42.638 "data_size": 63488 00:06:42.638 } 00:06:42.638 ] 00:06:42.638 }' 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.638 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.207 [2024-12-06 11:49:40.678044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:43.207 [2024-12-06 11:49:40.678113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.207 [2024-12-06 11:49:40.678135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:43.207 [2024-12-06 11:49:40.678144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.207 [2024-12-06 11:49:40.678565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.207 [2024-12-06 11:49:40.678594] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:06:43.207 [2024-12-06 11:49:40.678673] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:43.207 [2024-12-06 11:49:40.678697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:43.207 [2024-12-06 11:49:40.678787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:43.207 [2024-12-06 11:49:40.678795] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:43.207 [2024-12-06 11:49:40.679033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:43.207 [2024-12-06 11:49:40.679149] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:43.207 [2024-12-06 11:49:40.679166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:43.207 [2024-12-06 11:49:40.679282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.207 pt2 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.207 11:49:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.207 "name": "raid_bdev1", 00:06:43.207 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:43.207 "strip_size_kb": 64, 00:06:43.207 "state": "online", 00:06:43.207 "raid_level": "raid0", 00:06:43.207 "superblock": true, 00:06:43.207 "num_base_bdevs": 2, 00:06:43.207 "num_base_bdevs_discovered": 2, 00:06:43.207 "num_base_bdevs_operational": 2, 00:06:43.207 "base_bdevs_list": [ 00:06:43.207 { 00:06:43.207 "name": "pt1", 00:06:43.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:43.207 "is_configured": true, 00:06:43.207 "data_offset": 2048, 00:06:43.207 "data_size": 63488 00:06:43.207 }, 00:06:43.207 { 00:06:43.207 "name": "pt2", 00:06:43.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:43.207 "is_configured": true, 00:06:43.207 "data_offset": 2048, 00:06:43.207 "data_size": 63488 00:06:43.207 } 
00:06:43.207 ] 00:06:43.207 }' 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.207 11:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 [2024-12-06 11:49:41.065619] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.467 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:43.467 "name": "raid_bdev1", 00:06:43.467 "aliases": [ 00:06:43.467 "7cecc0cd-6802-44ba-a488-6e255481d498" 00:06:43.467 ], 00:06:43.467 "product_name": "Raid Volume", 00:06:43.467 "block_size": 512, 00:06:43.467 "num_blocks": 126976, 00:06:43.467 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:43.467 "assigned_rate_limits": { 00:06:43.467 "rw_ios_per_sec": 0, 
00:06:43.467 "rw_mbytes_per_sec": 0, 00:06:43.467 "r_mbytes_per_sec": 0, 00:06:43.467 "w_mbytes_per_sec": 0 00:06:43.467 }, 00:06:43.467 "claimed": false, 00:06:43.467 "zoned": false, 00:06:43.467 "supported_io_types": { 00:06:43.467 "read": true, 00:06:43.467 "write": true, 00:06:43.467 "unmap": true, 00:06:43.467 "flush": true, 00:06:43.467 "reset": true, 00:06:43.467 "nvme_admin": false, 00:06:43.467 "nvme_io": false, 00:06:43.467 "nvme_io_md": false, 00:06:43.467 "write_zeroes": true, 00:06:43.467 "zcopy": false, 00:06:43.467 "get_zone_info": false, 00:06:43.467 "zone_management": false, 00:06:43.467 "zone_append": false, 00:06:43.467 "compare": false, 00:06:43.467 "compare_and_write": false, 00:06:43.467 "abort": false, 00:06:43.467 "seek_hole": false, 00:06:43.467 "seek_data": false, 00:06:43.467 "copy": false, 00:06:43.467 "nvme_iov_md": false 00:06:43.467 }, 00:06:43.467 "memory_domains": [ 00:06:43.467 { 00:06:43.467 "dma_device_id": "system", 00:06:43.467 "dma_device_type": 1 00:06:43.467 }, 00:06:43.467 { 00:06:43.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.467 "dma_device_type": 2 00:06:43.467 }, 00:06:43.467 { 00:06:43.467 "dma_device_id": "system", 00:06:43.467 "dma_device_type": 1 00:06:43.467 }, 00:06:43.467 { 00:06:43.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.467 "dma_device_type": 2 00:06:43.467 } 00:06:43.467 ], 00:06:43.467 "driver_specific": { 00:06:43.467 "raid": { 00:06:43.467 "uuid": "7cecc0cd-6802-44ba-a488-6e255481d498", 00:06:43.467 "strip_size_kb": 64, 00:06:43.467 "state": "online", 00:06:43.467 "raid_level": "raid0", 00:06:43.467 "superblock": true, 00:06:43.467 "num_base_bdevs": 2, 00:06:43.467 "num_base_bdevs_discovered": 2, 00:06:43.467 "num_base_bdevs_operational": 2, 00:06:43.467 "base_bdevs_list": [ 00:06:43.467 { 00:06:43.467 "name": "pt1", 00:06:43.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:43.467 "is_configured": true, 00:06:43.467 "data_offset": 2048, 00:06:43.467 "data_size": 63488 
00:06:43.467 }, 00:06:43.467 { 00:06:43.467 "name": "pt2", 00:06:43.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:43.467 "is_configured": true, 00:06:43.467 "data_offset": 2048, 00:06:43.467 "data_size": 63488 00:06:43.467 } 00:06:43.467 ] 00:06:43.468 } 00:06:43.468 } 00:06:43.468 }' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:43.468 pt2' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.468 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:43.727 11:49:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:43.727 [2024-12-06 11:49:41.285249] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7cecc0cd-6802-44ba-a488-6e255481d498 '!=' 7cecc0cd-6802-44ba-a488-6e255481d498 ']' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72287 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72287 ']' 00:06:43.727 11:49:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72287 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72287 00:06:43.727 killing process with pid 72287 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72287' 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72287 00:06:43.727 [2024-12-06 11:49:41.364861] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.727 [2024-12-06 11:49:41.364959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.727 [2024-12-06 11:49:41.365008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.727 [2024-12-06 11:49:41.365016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:43.727 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72287 00:06:43.727 [2024-12-06 11:49:41.387102] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.987 ************************************ 00:06:43.987 END TEST raid_superblock_test 00:06:43.987 ************************************ 00:06:43.987 11:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:43.987 00:06:43.987 real 0m3.284s 00:06:43.987 user 0m5.013s 00:06:43.987 sys 0m0.713s 00:06:43.987 
11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.987 11:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.987 11:49:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:43.987 11:49:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:43.987 11:49:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.987 11:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.987 ************************************ 00:06:43.987 START TEST raid_read_error_test 00:06:43.987 ************************************ 00:06:43.987 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:06:43.987 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.988 11:49:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nGH3ub4jVC 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72482 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72482 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72482 ']' 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 
00:06:43.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.988 11:49:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.247 [2024-12-06 11:49:41.796421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:44.247 [2024-12-06 11:49:41.796554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72482 ] 00:06:44.247 [2024-12-06 11:49:41.940372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.247 [2024-12-06 11:49:41.983945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.506 [2024-12-06 11:49:42.025483] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.506 [2024-12-06 11:49:42.025519] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.075 BaseBdev1_malloc 00:06:45.075 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 true 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 [2024-12-06 11:49:42.642812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:45.076 [2024-12-06 11:49:42.642867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.076 [2024-12-06 11:49:42.642903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:45.076 [2024-12-06 11:49:42.642918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.076 [2024-12-06 11:49:42.645011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.076 [2024-12-06 11:49:42.645044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:45.076 BaseBdev1 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:45.076 11:49:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 BaseBdev2_malloc 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 true 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 [2024-12-06 11:49:42.699717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:45.076 [2024-12-06 11:49:42.699799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.076 [2024-12-06 11:49:42.699832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:45.076 [2024-12-06 11:49:42.699847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.076 [2024-12-06 11:49:42.703176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.076 [2024-12-06 11:49:42.703216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:06:45.076 BaseBdev2 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 [2024-12-06 11:49:42.711767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.076 [2024-12-06 11:49:42.713837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:45.076 [2024-12-06 11:49:42.714005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:45.076 [2024-12-06 11:49:42.714018] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:45.076 [2024-12-06 11:49:42.714302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:45.076 [2024-12-06 11:49:42.714436] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:45.076 [2024-12-06 11:49:42.714462] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:45.076 [2024-12-06 11:49:42.714585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.076 "name": "raid_bdev1", 00:06:45.076 "uuid": "6ee20912-9d95-4a79-af57-c82a805e179c", 00:06:45.076 "strip_size_kb": 64, 00:06:45.076 "state": "online", 00:06:45.076 "raid_level": "raid0", 00:06:45.076 "superblock": true, 00:06:45.076 "num_base_bdevs": 2, 00:06:45.076 "num_base_bdevs_discovered": 2, 00:06:45.076 "num_base_bdevs_operational": 2, 00:06:45.076 "base_bdevs_list": [ 00:06:45.076 { 00:06:45.076 "name": "BaseBdev1", 00:06:45.076 "uuid": "6a0ab629-994d-50ec-b6ee-f144cfcd9c22", 00:06:45.076 "is_configured": true, 00:06:45.076 "data_offset": 2048, 00:06:45.076 "data_size": 63488 
00:06:45.076 }, 00:06:45.076 { 00:06:45.076 "name": "BaseBdev2", 00:06:45.076 "uuid": "0f0cbd49-36f0-58a6-b6e6-ffc0759cc07a", 00:06:45.076 "is_configured": true, 00:06:45.076 "data_offset": 2048, 00:06:45.076 "data_size": 63488 00:06:45.076 } 00:06:45.076 ] 00:06:45.076 }' 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.076 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.645 11:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:45.645 11:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:45.645 [2024-12-06 11:49:43.247197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.583 "name": "raid_bdev1", 00:06:46.583 "uuid": "6ee20912-9d95-4a79-af57-c82a805e179c", 00:06:46.583 "strip_size_kb": 64, 00:06:46.583 "state": "online", 00:06:46.583 "raid_level": "raid0", 00:06:46.583 "superblock": true, 00:06:46.583 "num_base_bdevs": 2, 00:06:46.583 "num_base_bdevs_discovered": 2, 00:06:46.583 "num_base_bdevs_operational": 2, 00:06:46.583 "base_bdevs_list": [ 00:06:46.583 { 00:06:46.583 "name": "BaseBdev1", 00:06:46.583 "uuid": "6a0ab629-994d-50ec-b6ee-f144cfcd9c22", 00:06:46.583 "is_configured": true, 00:06:46.583 "data_offset": 2048, 00:06:46.583 "data_size": 63488 
00:06:46.583 }, 00:06:46.583 { 00:06:46.583 "name": "BaseBdev2", 00:06:46.583 "uuid": "0f0cbd49-36f0-58a6-b6e6-ffc0759cc07a", 00:06:46.583 "is_configured": true, 00:06:46.583 "data_offset": 2048, 00:06:46.583 "data_size": 63488 00:06:46.583 } 00:06:46.583 ] 00:06:46.583 }' 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.583 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.153 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:47.153 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.153 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.153 [2024-12-06 11:49:44.667185] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:47.153 [2024-12-06 11:49:44.667221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.153 [2024-12-06 11:49:44.669673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.153 [2024-12-06 11:49:44.669718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.153 [2024-12-06 11:49:44.669752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.153 [2024-12-06 11:49:44.669761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:47.153 { 00:06:47.153 "results": [ 00:06:47.153 { 00:06:47.153 "job": "raid_bdev1", 00:06:47.153 "core_mask": "0x1", 00:06:47.153 "workload": "randrw", 00:06:47.154 "percentage": 50, 00:06:47.154 "status": "finished", 00:06:47.154 "queue_depth": 1, 00:06:47.154 "io_size": 131072, 00:06:47.154 "runtime": 1.42102, 00:06:47.154 "iops": 18103.193480739188, 00:06:47.154 "mibps": 2262.8991850923985, 00:06:47.154 
"io_failed": 1, 00:06:47.154 "io_timeout": 0, 00:06:47.154 "avg_latency_us": 76.29104866298414, 00:06:47.154 "min_latency_us": 24.593886462882097, 00:06:47.154 "max_latency_us": 1395.1441048034935 00:06:47.154 } 00:06:47.154 ], 00:06:47.154 "core_count": 1 00:06:47.154 } 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72482 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72482 ']' 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72482 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72482 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.154 killing process with pid 72482 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72482' 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72482 00:06:47.154 [2024-12-06 11:49:44.714443] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.154 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72482 00:06:47.154 [2024-12-06 11:49:44.729658] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nGH3ub4jVC 00:06:47.414 11:49:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:06:47.414 00:06:47.414 real 0m3.268s 00:06:47.414 user 0m4.167s 00:06:47.414 sys 0m0.510s 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.414 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.414 ************************************ 00:06:47.414 END TEST raid_read_error_test 00:06:47.414 ************************************ 00:06:47.414 11:49:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:47.414 11:49:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:47.414 11:49:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.414 11:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.414 ************************************ 00:06:47.414 START TEST raid_write_error_test 00:06:47.414 ************************************ 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:47.414 11:49:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:47.414 11:49:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WUVJdlX1RY 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72611 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72611 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72611 ']' 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.414 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.414 [2024-12-06 11:49:45.136999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:47.414 [2024-12-06 11:49:45.137131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72611 ] 00:06:47.673 [2024-12-06 11:49:45.261204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.673 [2024-12-06 11:49:45.304008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.673 [2024-12-06 11:49:45.345530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.673 [2024-12-06 11:49:45.345571] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.243 BaseBdev1_malloc 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.243 true 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.243 [2024-12-06 11:49:45.978856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:48.243 [2024-12-06 11:49:45.978917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.243 [2024-12-06 11:49:45.978938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:48.243 [2024-12-06 11:49:45.978947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.243 [2024-12-06 11:49:45.980985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.243 [2024-12-06 11:49:45.981019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:48.243 BaseBdev1 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.243 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 BaseBdev2_malloc 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:48.504 11:49:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 true 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 [2024-12-06 11:49:46.035602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:48.504 [2024-12-06 11:49:46.035675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.504 [2024-12-06 11:49:46.035707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:48.504 [2024-12-06 11:49:46.035720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.504 [2024-12-06 11:49:46.038600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.504 [2024-12-06 11:49:46.038633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:48.504 BaseBdev2 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 [2024-12-06 11:49:46.047626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:48.504 [2024-12-06 11:49:46.049414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:48.504 [2024-12-06 11:49:46.049612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:48.504 [2024-12-06 11:49:46.049624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:48.504 [2024-12-06 11:49:46.049855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:48.504 [2024-12-06 11:49:46.049979] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:48.504 [2024-12-06 11:49:46.049993] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:48.504 [2024-12-06 11:49:46.050133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.504 "name": "raid_bdev1", 00:06:48.504 "uuid": "5e9860a0-5592-4623-a04a-8fa3c871e4cb", 00:06:48.504 "strip_size_kb": 64, 00:06:48.504 "state": "online", 00:06:48.504 "raid_level": "raid0", 00:06:48.504 "superblock": true, 00:06:48.504 "num_base_bdevs": 2, 00:06:48.504 "num_base_bdevs_discovered": 2, 00:06:48.504 "num_base_bdevs_operational": 2, 00:06:48.504 "base_bdevs_list": [ 00:06:48.504 { 00:06:48.504 "name": "BaseBdev1", 00:06:48.504 "uuid": "5c5196b7-33b7-5795-8092-0186e3ff77fa", 00:06:48.504 "is_configured": true, 00:06:48.504 "data_offset": 2048, 00:06:48.504 "data_size": 63488 00:06:48.504 }, 00:06:48.504 { 00:06:48.504 "name": "BaseBdev2", 00:06:48.504 "uuid": "8d11f913-e5f8-59b0-a514-8cdace2052d7", 00:06:48.504 "is_configured": true, 00:06:48.504 "data_offset": 2048, 00:06:48.505 "data_size": 63488 00:06:48.505 } 00:06:48.505 ] 00:06:48.505 }' 00:06:48.505 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.505 11:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.763 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:48.763 11:49:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:49.023 [2024-12-06 11:49:46.595066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.967 11:49:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.967 "name": "raid_bdev1", 00:06:49.967 "uuid": "5e9860a0-5592-4623-a04a-8fa3c871e4cb", 00:06:49.967 "strip_size_kb": 64, 00:06:49.967 "state": "online", 00:06:49.967 "raid_level": "raid0", 00:06:49.967 "superblock": true, 00:06:49.967 "num_base_bdevs": 2, 00:06:49.967 "num_base_bdevs_discovered": 2, 00:06:49.967 "num_base_bdevs_operational": 2, 00:06:49.967 "base_bdevs_list": [ 00:06:49.967 { 00:06:49.967 "name": "BaseBdev1", 00:06:49.967 "uuid": "5c5196b7-33b7-5795-8092-0186e3ff77fa", 00:06:49.967 "is_configured": true, 00:06:49.967 "data_offset": 2048, 00:06:49.967 "data_size": 63488 00:06:49.967 }, 00:06:49.967 { 00:06:49.967 "name": "BaseBdev2", 00:06:49.967 "uuid": "8d11f913-e5f8-59b0-a514-8cdace2052d7", 00:06:49.967 "is_configured": true, 00:06:49.967 "data_offset": 2048, 00:06:49.967 "data_size": 63488 00:06:49.967 } 00:06:49.967 ] 00:06:49.967 }' 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.967 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 [2024-12-06 11:49:47.991010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:50.536 [2024-12-06 11:49:47.991050] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.536 [2024-12-06 11:49:47.993502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.536 [2024-12-06 11:49:47.993568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.536 [2024-12-06 11:49:47.993603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.536 [2024-12-06 11:49:47.993618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:50.536 { 00:06:50.536 "results": [ 00:06:50.536 { 00:06:50.536 "job": "raid_bdev1", 00:06:50.536 "core_mask": "0x1", 00:06:50.536 "workload": "randrw", 00:06:50.536 "percentage": 50, 00:06:50.536 "status": "finished", 00:06:50.536 "queue_depth": 1, 00:06:50.536 "io_size": 131072, 00:06:50.536 "runtime": 1.396899, 00:06:50.536 "iops": 18173.82645416741, 00:06:50.536 "mibps": 2271.7283067709263, 00:06:50.536 "io_failed": 1, 00:06:50.536 "io_timeout": 0, 00:06:50.536 "avg_latency_us": 76.16620773972231, 00:06:50.536 "min_latency_us": 24.370305676855896, 00:06:50.536 "max_latency_us": 1488.1537117903931 00:06:50.536 } 00:06:50.536 ], 00:06:50.536 "core_count": 1 00:06:50.536 } 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72611 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72611 ']' 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72611 00:06:50.536 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72611 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72611' 00:06:50.536 killing process with pid 72611 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72611 00:06:50.536 [2024-12-06 11:49:48.035732] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72611 00:06:50.536 [2024-12-06 11:49:48.051038] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WUVJdlX1RY 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:06:50.536 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:50.795 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.795 11:49:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:50.795 11:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:06:50.795 00:06:50.795 real 0m3.253s 00:06:50.795 user 0m4.155s 00:06:50.795 sys 0m0.487s 00:06:50.795 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.795 11:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 ************************************ 00:06:50.795 END TEST raid_write_error_test 00:06:50.795 ************************************ 00:06:50.795 11:49:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:50.795 11:49:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:50.795 11:49:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:50.795 11:49:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.795 11:49:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 ************************************ 00:06:50.795 START TEST raid_state_function_test 00:06:50.795 ************************************ 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72738 00:06:50.795 11:49:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.795 Process raid pid: 72738 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72738' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72738 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72738 ']' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.795 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 [2024-12-06 11:49:48.455429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:50.795 [2024-12-06 11:49:48.455560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.054 [2024-12-06 11:49:48.600347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.054 [2024-12-06 11:49:48.646794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.054 [2024-12-06 11:49:48.688367] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.054 [2024-12-06 11:49:48.688408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.621 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.621 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.622 [2024-12-06 11:49:49.277241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.622 [2024-12-06 11:49:49.277302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.622 [2024-12-06 11:49:49.277330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.622 [2024-12-06 11:49:49.277339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.622 11:49:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.622 "name": "Existed_Raid", 00:06:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.622 "strip_size_kb": 64, 00:06:51.622 "state": "configuring", 00:06:51.622 
"raid_level": "concat", 00:06:51.622 "superblock": false, 00:06:51.622 "num_base_bdevs": 2, 00:06:51.622 "num_base_bdevs_discovered": 0, 00:06:51.622 "num_base_bdevs_operational": 2, 00:06:51.622 "base_bdevs_list": [ 00:06:51.622 { 00:06:51.622 "name": "BaseBdev1", 00:06:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.622 "is_configured": false, 00:06:51.622 "data_offset": 0, 00:06:51.622 "data_size": 0 00:06:51.622 }, 00:06:51.622 { 00:06:51.622 "name": "BaseBdev2", 00:06:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.622 "is_configured": false, 00:06:51.622 "data_offset": 0, 00:06:51.622 "data_size": 0 00:06:51.622 } 00:06:51.622 ] 00:06:51.622 }' 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.622 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 [2024-12-06 11:49:49.744254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.190 [2024-12-06 11:49:49.744298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:52.190 [2024-12-06 11:49:49.752244] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:52.190 [2024-12-06 11:49:49.752279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.190 [2024-12-06 11:49:49.752296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.190 [2024-12-06 11:49:49.752306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 [2024-12-06 11:49:49.768999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.190 BaseBdev1 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 [ 00:06:52.190 { 00:06:52.190 "name": "BaseBdev1", 00:06:52.190 "aliases": [ 00:06:52.190 "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b" 00:06:52.190 ], 00:06:52.190 "product_name": "Malloc disk", 00:06:52.190 "block_size": 512, 00:06:52.190 "num_blocks": 65536, 00:06:52.190 "uuid": "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b", 00:06:52.190 "assigned_rate_limits": { 00:06:52.190 "rw_ios_per_sec": 0, 00:06:52.190 "rw_mbytes_per_sec": 0, 00:06:52.190 "r_mbytes_per_sec": 0, 00:06:52.190 "w_mbytes_per_sec": 0 00:06:52.190 }, 00:06:52.190 "claimed": true, 00:06:52.190 "claim_type": "exclusive_write", 00:06:52.190 "zoned": false, 00:06:52.190 "supported_io_types": { 00:06:52.190 "read": true, 00:06:52.190 "write": true, 00:06:52.190 "unmap": true, 00:06:52.190 "flush": true, 00:06:52.190 "reset": true, 00:06:52.190 "nvme_admin": false, 00:06:52.190 "nvme_io": false, 00:06:52.190 "nvme_io_md": false, 00:06:52.190 "write_zeroes": true, 00:06:52.190 "zcopy": true, 00:06:52.190 "get_zone_info": false, 00:06:52.190 "zone_management": false, 00:06:52.190 "zone_append": false, 00:06:52.190 "compare": false, 00:06:52.190 "compare_and_write": false, 00:06:52.190 "abort": true, 00:06:52.190 "seek_hole": false, 00:06:52.190 "seek_data": false, 00:06:52.190 "copy": true, 00:06:52.190 "nvme_iov_md": 
false 00:06:52.190 }, 00:06:52.190 "memory_domains": [ 00:06:52.190 { 00:06:52.190 "dma_device_id": "system", 00:06:52.190 "dma_device_type": 1 00:06:52.190 }, 00:06:52.190 { 00:06:52.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.190 "dma_device_type": 2 00:06:52.190 } 00:06:52.190 ], 00:06:52.190 "driver_specific": {} 00:06:52.190 } 00:06:52.190 ] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.190 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.191 
11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.191 "name": "Existed_Raid", 00:06:52.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.191 "strip_size_kb": 64, 00:06:52.191 "state": "configuring", 00:06:52.191 "raid_level": "concat", 00:06:52.191 "superblock": false, 00:06:52.191 "num_base_bdevs": 2, 00:06:52.191 "num_base_bdevs_discovered": 1, 00:06:52.191 "num_base_bdevs_operational": 2, 00:06:52.191 "base_bdevs_list": [ 00:06:52.191 { 00:06:52.191 "name": "BaseBdev1", 00:06:52.191 "uuid": "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b", 00:06:52.191 "is_configured": true, 00:06:52.191 "data_offset": 0, 00:06:52.191 "data_size": 65536 00:06:52.191 }, 00:06:52.191 { 00:06:52.191 "name": "BaseBdev2", 00:06:52.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.191 "is_configured": false, 00:06:52.191 "data_offset": 0, 00:06:52.191 "data_size": 0 00:06:52.191 } 00:06:52.191 ] 00:06:52.191 }' 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.191 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.759 [2024-12-06 11:49:50.268230] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.759 [2024-12-06 11:49:50.268293] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.759 [2024-12-06 11:49:50.280258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.759 [2024-12-06 11:49:50.282131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.759 [2024-12-06 11:49:50.282168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.759 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.760 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.760 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.760 "name": "Existed_Raid", 00:06:52.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.760 "strip_size_kb": 64, 00:06:52.760 "state": "configuring", 00:06:52.760 "raid_level": "concat", 00:06:52.760 "superblock": false, 00:06:52.760 "num_base_bdevs": 2, 00:06:52.760 "num_base_bdevs_discovered": 1, 00:06:52.760 "num_base_bdevs_operational": 2, 00:06:52.760 "base_bdevs_list": [ 00:06:52.760 { 00:06:52.760 "name": "BaseBdev1", 00:06:52.760 "uuid": "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b", 00:06:52.760 "is_configured": true, 00:06:52.760 "data_offset": 0, 00:06:52.760 "data_size": 65536 00:06:52.760 }, 00:06:52.760 { 00:06:52.760 "name": "BaseBdev2", 00:06:52.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.760 "is_configured": false, 00:06:52.760 "data_offset": 0, 00:06:52.760 "data_size": 0 00:06:52.760 } 
00:06:52.760 ] 00:06:52.760 }' 00:06:52.760 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.760 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 [2024-12-06 11:49:50.730843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.020 [2024-12-06 11:49:50.730894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:53.020 [2024-12-06 11:49:50.730915] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:53.020 [2024-12-06 11:49:50.731267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:53.020 [2024-12-06 11:49:50.731443] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:53.020 [2024-12-06 11:49:50.731472] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:53.020 [2024-12-06 11:49:50.731709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.020 BaseBdev2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:53.020 11:49:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 [ 00:06:53.020 { 00:06:53.020 "name": "BaseBdev2", 00:06:53.020 "aliases": [ 00:06:53.020 "3b9855e3-19e7-492f-9ac1-65aeb5255a61" 00:06:53.020 ], 00:06:53.020 "product_name": "Malloc disk", 00:06:53.020 "block_size": 512, 00:06:53.020 "num_blocks": 65536, 00:06:53.020 "uuid": "3b9855e3-19e7-492f-9ac1-65aeb5255a61", 00:06:53.020 "assigned_rate_limits": { 00:06:53.020 "rw_ios_per_sec": 0, 00:06:53.020 "rw_mbytes_per_sec": 0, 00:06:53.020 "r_mbytes_per_sec": 0, 00:06:53.020 "w_mbytes_per_sec": 0 00:06:53.020 }, 00:06:53.020 "claimed": true, 00:06:53.020 "claim_type": "exclusive_write", 00:06:53.020 "zoned": false, 00:06:53.020 "supported_io_types": { 00:06:53.020 "read": true, 00:06:53.020 "write": true, 00:06:53.020 "unmap": true, 00:06:53.020 "flush": true, 00:06:53.020 "reset": true, 00:06:53.020 "nvme_admin": false, 00:06:53.020 "nvme_io": false, 00:06:53.020 "nvme_io_md": 
false, 00:06:53.020 "write_zeroes": true, 00:06:53.020 "zcopy": true, 00:06:53.020 "get_zone_info": false, 00:06:53.020 "zone_management": false, 00:06:53.020 "zone_append": false, 00:06:53.020 "compare": false, 00:06:53.020 "compare_and_write": false, 00:06:53.020 "abort": true, 00:06:53.020 "seek_hole": false, 00:06:53.020 "seek_data": false, 00:06:53.020 "copy": true, 00:06:53.020 "nvme_iov_md": false 00:06:53.020 }, 00:06:53.020 "memory_domains": [ 00:06:53.020 { 00:06:53.020 "dma_device_id": "system", 00:06:53.020 "dma_device_type": 1 00:06:53.020 }, 00:06:53.020 { 00:06:53.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.020 "dma_device_type": 2 00:06:53.020 } 00:06:53.020 ], 00:06:53.020 "driver_specific": {} 00:06:53.020 } 00:06:53.020 ] 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.020 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.280 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.280 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.280 "name": "Existed_Raid", 00:06:53.280 "uuid": "ecba48e0-b444-4c60-a48f-f23a406d7fda", 00:06:53.280 "strip_size_kb": 64, 00:06:53.280 "state": "online", 00:06:53.280 "raid_level": "concat", 00:06:53.280 "superblock": false, 00:06:53.280 "num_base_bdevs": 2, 00:06:53.280 "num_base_bdevs_discovered": 2, 00:06:53.280 "num_base_bdevs_operational": 2, 00:06:53.280 "base_bdevs_list": [ 00:06:53.280 { 00:06:53.280 "name": "BaseBdev1", 00:06:53.280 "uuid": "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b", 00:06:53.280 "is_configured": true, 00:06:53.280 "data_offset": 0, 00:06:53.280 "data_size": 65536 00:06:53.280 }, 00:06:53.280 { 00:06:53.280 "name": "BaseBdev2", 00:06:53.280 "uuid": "3b9855e3-19e7-492f-9ac1-65aeb5255a61", 00:06:53.280 "is_configured": true, 00:06:53.280 "data_offset": 0, 00:06:53.280 "data_size": 65536 00:06:53.280 } 00:06:53.280 ] 00:06:53.280 }' 00:06:53.280 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
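Each `verify_raid_bdev_state` call in the trace fetches `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields. A minimal Python equivalent of that check, written against the JSON shape the log actually prints (field names taken from the dumps above; the helper name here is just illustrative), looks like:

```python
import json


def verify_raid_bdev_state(raid_bdevs_json: str, name: str,
                           expected_state: str, raid_level: str,
                           strip_size: int, num_operational: int) -> bool:
    """Re-implement the jq select from bdev_raid.sh@113 in Python:
    pick the named raid bdev out of `bdev_raid_get_bdevs all` output and
    compare the fields the shell helper asserts on."""
    info = next(b for b in json.loads(raid_bdevs_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)
```

In the dump just above, after both base bdevs are claimed, `Existed_Raid` reports `"state": "online"` with 2 of 2 base bdevs discovered and operational, so a check against `online`/`concat`/64/2 passes.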
00:06:53.280 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:53.540 [2024-12-06 11:49:51.202348] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.540 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:53.540 "name": "Existed_Raid", 00:06:53.540 "aliases": [ 00:06:53.540 "ecba48e0-b444-4c60-a48f-f23a406d7fda" 00:06:53.540 ], 00:06:53.540 "product_name": "Raid Volume", 00:06:53.540 "block_size": 512, 00:06:53.540 "num_blocks": 131072, 00:06:53.540 "uuid": "ecba48e0-b444-4c60-a48f-f23a406d7fda", 00:06:53.540 "assigned_rate_limits": { 00:06:53.540 "rw_ios_per_sec": 0, 00:06:53.540 "rw_mbytes_per_sec": 0, 00:06:53.540 "r_mbytes_per_sec": 
0, 00:06:53.541 "w_mbytes_per_sec": 0 00:06:53.541 }, 00:06:53.541 "claimed": false, 00:06:53.541 "zoned": false, 00:06:53.541 "supported_io_types": { 00:06:53.541 "read": true, 00:06:53.541 "write": true, 00:06:53.541 "unmap": true, 00:06:53.541 "flush": true, 00:06:53.541 "reset": true, 00:06:53.541 "nvme_admin": false, 00:06:53.541 "nvme_io": false, 00:06:53.541 "nvme_io_md": false, 00:06:53.541 "write_zeroes": true, 00:06:53.541 "zcopy": false, 00:06:53.541 "get_zone_info": false, 00:06:53.541 "zone_management": false, 00:06:53.541 "zone_append": false, 00:06:53.541 "compare": false, 00:06:53.541 "compare_and_write": false, 00:06:53.541 "abort": false, 00:06:53.541 "seek_hole": false, 00:06:53.541 "seek_data": false, 00:06:53.541 "copy": false, 00:06:53.541 "nvme_iov_md": false 00:06:53.541 }, 00:06:53.541 "memory_domains": [ 00:06:53.541 { 00:06:53.541 "dma_device_id": "system", 00:06:53.541 "dma_device_type": 1 00:06:53.541 }, 00:06:53.541 { 00:06:53.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.541 "dma_device_type": 2 00:06:53.541 }, 00:06:53.541 { 00:06:53.541 "dma_device_id": "system", 00:06:53.541 "dma_device_type": 1 00:06:53.541 }, 00:06:53.541 { 00:06:53.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.541 "dma_device_type": 2 00:06:53.541 } 00:06:53.541 ], 00:06:53.541 "driver_specific": { 00:06:53.541 "raid": { 00:06:53.541 "uuid": "ecba48e0-b444-4c60-a48f-f23a406d7fda", 00:06:53.541 "strip_size_kb": 64, 00:06:53.541 "state": "online", 00:06:53.541 "raid_level": "concat", 00:06:53.541 "superblock": false, 00:06:53.541 "num_base_bdevs": 2, 00:06:53.541 "num_base_bdevs_discovered": 2, 00:06:53.541 "num_base_bdevs_operational": 2, 00:06:53.541 "base_bdevs_list": [ 00:06:53.541 { 00:06:53.541 "name": "BaseBdev1", 00:06:53.541 "uuid": "f8af33a3-abc8-46fb-aef8-eb59f9d57c9b", 00:06:53.541 "is_configured": true, 00:06:53.541 "data_offset": 0, 00:06:53.541 "data_size": 65536 00:06:53.541 }, 00:06:53.541 { 00:06:53.541 "name": "BaseBdev2", 
00:06:53.541 "uuid": "3b9855e3-19e7-492f-9ac1-65aeb5255a61", 00:06:53.541 "is_configured": true, 00:06:53.541 "data_offset": 0, 00:06:53.541 "data_size": 65536 00:06:53.541 } 00:06:53.541 ] 00:06:53.541 } 00:06:53.541 } 00:06:53.541 }' 00:06:53.541 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:53.541 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:53.541 BaseBdev2' 00:06:53.541 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 [2024-12-06 11:49:51.385794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:53.801 [2024-12-06 11:49:51.385829] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.801 [2024-12-06 11:49:51.385873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.801 "name": "Existed_Raid", 00:06:53.801 "uuid": "ecba48e0-b444-4c60-a48f-f23a406d7fda", 00:06:53.801 "strip_size_kb": 64, 00:06:53.801 
"state": "offline", 00:06:53.801 "raid_level": "concat", 00:06:53.801 "superblock": false, 00:06:53.801 "num_base_bdevs": 2, 00:06:53.801 "num_base_bdevs_discovered": 1, 00:06:53.801 "num_base_bdevs_operational": 1, 00:06:53.801 "base_bdevs_list": [ 00:06:53.801 { 00:06:53.801 "name": null, 00:06:53.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.801 "is_configured": false, 00:06:53.801 "data_offset": 0, 00:06:53.801 "data_size": 65536 00:06:53.801 }, 00:06:53.801 { 00:06:53.801 "name": "BaseBdev2", 00:06:53.801 "uuid": "3b9855e3-19e7-492f-9ac1-65aeb5255a61", 00:06:53.801 "is_configured": true, 00:06:53.801 "data_offset": 0, 00:06:53.801 "data_size": 65536 00:06:53.801 } 00:06:53.801 ] 00:06:53.801 }' 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.801 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.061 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.320 [2024-12-06 11:49:51.824328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:54.320 [2024-12-06 11:49:51.824398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72738 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72738 ']' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72738 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72738 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.320 killing process with pid 72738 00:06:54.320 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72738' 00:06:54.321 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72738 00:06:54.321 [2024-12-06 11:49:51.935069] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.321 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72738 00:06:54.321 [2024-12-06 11:49:51.936035] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.580 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:54.581 00:06:54.581 real 0m3.811s 00:06:54.581 user 0m5.999s 00:06:54.581 sys 0m0.753s 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.581 ************************************ 00:06:54.581 END TEST raid_state_function_test 00:06:54.581 ************************************ 00:06:54.581 11:49:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:54.581 11:49:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:06:54.581 11:49:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.581 11:49:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.581 ************************************ 00:06:54.581 START TEST raid_state_function_test_sb 00:06:54.581 ************************************ 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72980 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.581 Process raid pid: 72980 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72980' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72980 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72980 ']' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.581 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.581 11:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.581 [2024-12-06 11:49:52.332775] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:54.581 [2024-12-06 11:49:52.332898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.841 [2024-12-06 11:49:52.478459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.841 [2024-12-06 11:49:52.522844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.841 [2024-12-06 11:49:52.564505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.841 [2024-12-06 11:49:52.564545] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.779 [2024-12-06 11:49:53.177437] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:55.779 [2024-12-06 11:49:53.177501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.779 [2024-12-06 11:49:53.177512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.779 [2024-12-06 11:49:53.177522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.779 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.780 "name": "Existed_Raid", 00:06:55.780 "uuid": "d16f72ad-3cb1-4439-8ffa-8c04cae9f39d", 00:06:55.780 "strip_size_kb": 64, 00:06:55.780 "state": "configuring", 00:06:55.780 "raid_level": "concat", 00:06:55.780 "superblock": true, 00:06:55.780 "num_base_bdevs": 2, 00:06:55.780 "num_base_bdevs_discovered": 0, 00:06:55.780 "num_base_bdevs_operational": 2, 00:06:55.780 "base_bdevs_list": [ 00:06:55.780 { 00:06:55.780 "name": "BaseBdev1", 00:06:55.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.780 "is_configured": false, 00:06:55.780 "data_offset": 0, 00:06:55.780 "data_size": 0 00:06:55.780 }, 00:06:55.780 { 00:06:55.780 "name": "BaseBdev2", 00:06:55.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.780 "is_configured": false, 00:06:55.780 "data_offset": 0, 00:06:55.780 "data_size": 0 00:06:55.780 } 00:06:55.780 ] 00:06:55.780 }' 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.780 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 [2024-12-06 11:49:53.576616] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.040 
[2024-12-06 11:49:53.576666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 [2024-12-06 11:49:53.588610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.040 [2024-12-06 11:49:53.588667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.040 [2024-12-06 11:49:53.588683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.040 [2024-12-06 11:49:53.588693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 [2024-12-06 11:49:53.609244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.040 BaseBdev1 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 [ 00:06:56.040 { 00:06:56.040 "name": "BaseBdev1", 00:06:56.040 "aliases": [ 00:06:56.040 "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92" 00:06:56.040 ], 00:06:56.040 "product_name": "Malloc disk", 00:06:56.040 "block_size": 512, 00:06:56.040 "num_blocks": 65536, 00:06:56.040 "uuid": "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92", 00:06:56.040 "assigned_rate_limits": { 00:06:56.040 "rw_ios_per_sec": 0, 00:06:56.040 "rw_mbytes_per_sec": 0, 00:06:56.040 "r_mbytes_per_sec": 0, 00:06:56.040 "w_mbytes_per_sec": 0 00:06:56.040 }, 00:06:56.040 "claimed": true, 00:06:56.040 "claim_type": 
"exclusive_write", 00:06:56.040 "zoned": false, 00:06:56.040 "supported_io_types": { 00:06:56.040 "read": true, 00:06:56.040 "write": true, 00:06:56.040 "unmap": true, 00:06:56.040 "flush": true, 00:06:56.040 "reset": true, 00:06:56.040 "nvme_admin": false, 00:06:56.040 "nvme_io": false, 00:06:56.040 "nvme_io_md": false, 00:06:56.040 "write_zeroes": true, 00:06:56.040 "zcopy": true, 00:06:56.040 "get_zone_info": false, 00:06:56.040 "zone_management": false, 00:06:56.040 "zone_append": false, 00:06:56.040 "compare": false, 00:06:56.040 "compare_and_write": false, 00:06:56.040 "abort": true, 00:06:56.040 "seek_hole": false, 00:06:56.040 "seek_data": false, 00:06:56.040 "copy": true, 00:06:56.040 "nvme_iov_md": false 00:06:56.040 }, 00:06:56.040 "memory_domains": [ 00:06:56.040 { 00:06:56.040 "dma_device_id": "system", 00:06:56.040 "dma_device_type": 1 00:06:56.040 }, 00:06:56.040 { 00:06:56.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.040 "dma_device_type": 2 00:06:56.040 } 00:06:56.040 ], 00:06:56.040 "driver_specific": {} 00:06:56.040 } 00:06:56.040 ] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.040 "name": "Existed_Raid", 00:06:56.040 "uuid": "06e352ef-3e6e-46da-8f91-6d118b460078", 00:06:56.040 "strip_size_kb": 64, 00:06:56.040 "state": "configuring", 00:06:56.040 "raid_level": "concat", 00:06:56.040 "superblock": true, 00:06:56.040 "num_base_bdevs": 2, 00:06:56.040 "num_base_bdevs_discovered": 1, 00:06:56.040 "num_base_bdevs_operational": 2, 00:06:56.040 "base_bdevs_list": [ 00:06:56.040 { 00:06:56.040 "name": "BaseBdev1", 00:06:56.040 "uuid": "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92", 00:06:56.040 "is_configured": true, 00:06:56.040 "data_offset": 2048, 00:06:56.040 "data_size": 63488 00:06:56.040 }, 00:06:56.040 { 00:06:56.040 "name": "BaseBdev2", 00:06:56.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.040 "is_configured": false, 00:06:56.040 
"data_offset": 0, 00:06:56.040 "data_size": 0 00:06:56.040 } 00:06:56.040 ] 00:06:56.040 }' 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.040 11:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.610 [2024-12-06 11:49:54.108396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.610 [2024-12-06 11:49:54.108459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.610 [2024-12-06 11:49:54.120424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.610 [2024-12-06 11:49:54.122250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.610 [2024-12-06 11:49:54.122296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.610 "name": "Existed_Raid", 00:06:56.610 "uuid": "1930c632-efe6-475f-ade5-591f065363a2", 00:06:56.610 "strip_size_kb": 64, 00:06:56.610 "state": "configuring", 00:06:56.610 "raid_level": "concat", 00:06:56.610 "superblock": true, 00:06:56.610 "num_base_bdevs": 2, 00:06:56.610 "num_base_bdevs_discovered": 1, 00:06:56.610 "num_base_bdevs_operational": 2, 00:06:56.610 "base_bdevs_list": [ 00:06:56.610 { 00:06:56.610 "name": "BaseBdev1", 00:06:56.610 "uuid": "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92", 00:06:56.610 "is_configured": true, 00:06:56.610 "data_offset": 2048, 00:06:56.610 "data_size": 63488 00:06:56.610 }, 00:06:56.610 { 00:06:56.610 "name": "BaseBdev2", 00:06:56.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.610 "is_configured": false, 00:06:56.610 "data_offset": 0, 00:06:56.610 "data_size": 0 00:06:56.610 } 00:06:56.610 ] 00:06:56.610 }' 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.610 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.870 [2024-12-06 11:49:54.586899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:56.870 [2024-12-06 11:49:54.587495] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:56.870 [2024-12-06 11:49:54.587590] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:56.870 BaseBdev2 00:06:56.870 [2024-12-06 11:49:54.588556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002390 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.870 [2024-12-06 11:49:54.589079] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:56.870 [2024-12-06 11:49:54.589157] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:56.870 [2024-12-06 11:49:54.589608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.870 11:49:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.870 [ 00:06:56.870 { 00:06:56.870 "name": "BaseBdev2", 00:06:56.870 "aliases": [ 00:06:56.870 "5e22b1e4-5ab7-459f-90bf-b47634e9df8f" 00:06:56.870 ], 00:06:56.870 "product_name": "Malloc disk", 00:06:56.870 "block_size": 512, 00:06:56.870 "num_blocks": 65536, 00:06:56.870 "uuid": "5e22b1e4-5ab7-459f-90bf-b47634e9df8f", 00:06:56.870 "assigned_rate_limits": { 00:06:56.870 "rw_ios_per_sec": 0, 00:06:56.870 "rw_mbytes_per_sec": 0, 00:06:56.870 "r_mbytes_per_sec": 0, 00:06:56.870 "w_mbytes_per_sec": 0 00:06:56.870 }, 00:06:56.870 "claimed": true, 00:06:56.870 "claim_type": "exclusive_write", 00:06:56.870 "zoned": false, 00:06:56.870 "supported_io_types": { 00:06:56.870 "read": true, 00:06:56.870 "write": true, 00:06:56.870 "unmap": true, 00:06:56.870 "flush": true, 00:06:56.870 "reset": true, 00:06:56.870 "nvme_admin": false, 00:06:56.870 "nvme_io": false, 00:06:56.870 "nvme_io_md": false, 00:06:56.870 "write_zeroes": true, 00:06:56.870 "zcopy": true, 00:06:56.870 "get_zone_info": false, 00:06:56.870 "zone_management": false, 00:06:56.870 "zone_append": false, 00:06:56.870 "compare": false, 00:06:56.870 "compare_and_write": false, 00:06:56.870 "abort": true, 00:06:56.870 "seek_hole": false, 00:06:56.870 "seek_data": false, 00:06:56.870 "copy": true, 00:06:56.870 "nvme_iov_md": false 00:06:56.870 }, 00:06:56.870 "memory_domains": [ 00:06:56.870 { 00:06:56.870 "dma_device_id": "system", 00:06:57.130 "dma_device_type": 1 00:06:57.130 }, 00:06:57.130 { 00:06:57.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.130 "dma_device_type": 2 00:06:57.130 } 00:06:57.130 ], 00:06:57.130 "driver_specific": {} 00:06:57.130 } 00:06:57.130 ] 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.130 "name": "Existed_Raid", 00:06:57.130 "uuid": "1930c632-efe6-475f-ade5-591f065363a2", 00:06:57.130 "strip_size_kb": 64, 00:06:57.130 "state": "online", 00:06:57.130 "raid_level": "concat", 00:06:57.130 "superblock": true, 00:06:57.130 "num_base_bdevs": 2, 00:06:57.130 "num_base_bdevs_discovered": 2, 00:06:57.130 "num_base_bdevs_operational": 2, 00:06:57.130 "base_bdevs_list": [ 00:06:57.130 { 00:06:57.130 "name": "BaseBdev1", 00:06:57.130 "uuid": "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92", 00:06:57.130 "is_configured": true, 00:06:57.130 "data_offset": 2048, 00:06:57.130 "data_size": 63488 00:06:57.130 }, 00:06:57.130 { 00:06:57.130 "name": "BaseBdev2", 00:06:57.130 "uuid": "5e22b1e4-5ab7-459f-90bf-b47634e9df8f", 00:06:57.130 "is_configured": true, 00:06:57.130 "data_offset": 2048, 00:06:57.130 "data_size": 63488 00:06:57.130 } 00:06:57.130 ] 00:06:57.130 }' 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.130 11:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:57.390 11:49:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.390 [2024-12-06 11:49:55.090247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.390 "name": "Existed_Raid", 00:06:57.390 "aliases": [ 00:06:57.390 "1930c632-efe6-475f-ade5-591f065363a2" 00:06:57.390 ], 00:06:57.390 "product_name": "Raid Volume", 00:06:57.390 "block_size": 512, 00:06:57.390 "num_blocks": 126976, 00:06:57.390 "uuid": "1930c632-efe6-475f-ade5-591f065363a2", 00:06:57.390 "assigned_rate_limits": { 00:06:57.390 "rw_ios_per_sec": 0, 00:06:57.390 "rw_mbytes_per_sec": 0, 00:06:57.390 "r_mbytes_per_sec": 0, 00:06:57.390 "w_mbytes_per_sec": 0 00:06:57.390 }, 00:06:57.390 "claimed": false, 00:06:57.390 "zoned": false, 00:06:57.390 "supported_io_types": { 00:06:57.390 "read": true, 00:06:57.390 "write": true, 00:06:57.390 "unmap": true, 00:06:57.390 "flush": true, 00:06:57.390 "reset": true, 00:06:57.390 "nvme_admin": false, 00:06:57.390 "nvme_io": false, 00:06:57.390 "nvme_io_md": false, 00:06:57.390 "write_zeroes": true, 00:06:57.390 "zcopy": false, 00:06:57.390 "get_zone_info": false, 00:06:57.390 "zone_management": false, 00:06:57.390 "zone_append": false, 00:06:57.390 "compare": false, 00:06:57.390 "compare_and_write": false, 00:06:57.390 "abort": false, 00:06:57.390 "seek_hole": false, 00:06:57.390 "seek_data": false, 00:06:57.390 "copy": false, 00:06:57.390 "nvme_iov_md": false 00:06:57.390 }, 00:06:57.390 "memory_domains": [ 00:06:57.390 { 00:06:57.390 "dma_device_id": "system", 00:06:57.390 
"dma_device_type": 1 00:06:57.390 }, 00:06:57.390 { 00:06:57.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.390 "dma_device_type": 2 00:06:57.390 }, 00:06:57.390 { 00:06:57.390 "dma_device_id": "system", 00:06:57.390 "dma_device_type": 1 00:06:57.390 }, 00:06:57.390 { 00:06:57.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.390 "dma_device_type": 2 00:06:57.390 } 00:06:57.390 ], 00:06:57.390 "driver_specific": { 00:06:57.390 "raid": { 00:06:57.390 "uuid": "1930c632-efe6-475f-ade5-591f065363a2", 00:06:57.390 "strip_size_kb": 64, 00:06:57.390 "state": "online", 00:06:57.390 "raid_level": "concat", 00:06:57.390 "superblock": true, 00:06:57.390 "num_base_bdevs": 2, 00:06:57.390 "num_base_bdevs_discovered": 2, 00:06:57.390 "num_base_bdevs_operational": 2, 00:06:57.390 "base_bdevs_list": [ 00:06:57.390 { 00:06:57.390 "name": "BaseBdev1", 00:06:57.390 "uuid": "8cce7fa2-3e9e-4e8d-b740-8b63e8ecdf92", 00:06:57.390 "is_configured": true, 00:06:57.390 "data_offset": 2048, 00:06:57.390 "data_size": 63488 00:06:57.390 }, 00:06:57.390 { 00:06:57.390 "name": "BaseBdev2", 00:06:57.390 "uuid": "5e22b1e4-5ab7-459f-90bf-b47634e9df8f", 00:06:57.390 "is_configured": true, 00:06:57.390 "data_offset": 2048, 00:06:57.390 "data_size": 63488 00:06:57.390 } 00:06:57.390 ] 00:06:57.390 } 00:06:57.390 } 00:06:57.390 }' 00:06:57.390 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:57.649 BaseBdev2' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.649 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:57.650 11:49:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 [2024-12-06 11:49:55.309650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:57.650 [2024-12-06 11:49:55.309682] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.650 [2024-12-06 11:49:55.309736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.650 "name": "Existed_Raid", 00:06:57.650 "uuid": "1930c632-efe6-475f-ade5-591f065363a2", 00:06:57.650 "strip_size_kb": 64, 00:06:57.650 "state": "offline", 00:06:57.650 "raid_level": "concat", 00:06:57.650 "superblock": true, 00:06:57.650 "num_base_bdevs": 2, 00:06:57.650 "num_base_bdevs_discovered": 1, 00:06:57.650 "num_base_bdevs_operational": 1, 00:06:57.650 "base_bdevs_list": [ 00:06:57.650 { 00:06:57.650 "name": null, 00:06:57.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.650 "is_configured": false, 00:06:57.650 "data_offset": 0, 00:06:57.650 "data_size": 63488 00:06:57.650 }, 00:06:57.650 { 00:06:57.650 "name": "BaseBdev2", 00:06:57.650 "uuid": "5e22b1e4-5ab7-459f-90bf-b47634e9df8f", 00:06:57.650 "is_configured": true, 00:06:57.650 "data_offset": 2048, 00:06:57.650 "data_size": 63488 00:06:57.650 } 00:06:57.650 ] 00:06:57.650 }' 00:06:57.650 11:49:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.650 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.218 [2024-12-06 11:49:55.760216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:58.218 [2024-12-06 11:49:55.760343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72980 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72980 ']' 00:06:58.218 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72980 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72980 00:06:58.219 killing process with pid 72980 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72980' 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72980 00:06:58.219 [2024-12-06 11:49:55.866371] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.219 11:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72980 00:06:58.219 [2024-12-06 11:49:55.867350] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.479 11:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:58.479 00:06:58.479 real 0m3.864s 00:06:58.479 user 0m6.065s 00:06:58.479 sys 0m0.756s 00:06:58.479 ************************************ 00:06:58.479 END TEST raid_state_function_test_sb 00:06:58.479 ************************************ 00:06:58.479 11:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.479 11:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.479 11:49:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:58.479 11:49:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:58.479 11:49:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.479 11:49:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.479 ************************************ 00:06:58.479 START TEST raid_superblock_test 00:06:58.479 ************************************ 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73221 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73221 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73221 ']' 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.479 11:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.739 [2024-12-06 11:49:56.266769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:58.739 [2024-12-06 11:49:56.266961] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73221 ] 00:06:58.739 [2024-12-06 11:49:56.411447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.739 [2024-12-06 11:49:56.455406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.999 [2024-12-06 11:49:56.497819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.999 [2024-12-06 11:49:56.497928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:59.569 11:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 malloc1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 [2024-12-06 11:49:57.116166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:59.569 [2024-12-06 11:49:57.116292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.569 [2024-12-06 11:49:57.116347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:59.569 [2024-12-06 11:49:57.116362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.569 
[2024-12-06 11:49:57.118409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.569 [2024-12-06 11:49:57.118507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:59.569 pt1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 malloc2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 [2024-12-06 11:49:57.155199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.569 [2024-12-06 11:49:57.155300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.569 [2024-12-06 11:49:57.155335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:59.569 [2024-12-06 11:49:57.155366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.569 [2024-12-06 11:49:57.157539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.569 [2024-12-06 11:49:57.157607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:59.569 pt2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 [2024-12-06 11:49:57.167229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:59.569 [2024-12-06 11:49:57.169196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:59.569 [2024-12-06 11:49:57.169400] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:59.569 [2024-12-06 11:49:57.169456] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:59.569 
[2024-12-06 11:49:57.169751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:59.569 [2024-12-06 11:49:57.169919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:59.569 [2024-12-06 11:49:57.169961] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:59.569 [2024-12-06 11:49:57.170124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.569 11:49:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.569 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.569 "name": "raid_bdev1", 00:06:59.569 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:06:59.569 "strip_size_kb": 64, 00:06:59.569 "state": "online", 00:06:59.569 "raid_level": "concat", 00:06:59.569 "superblock": true, 00:06:59.569 "num_base_bdevs": 2, 00:06:59.569 "num_base_bdevs_discovered": 2, 00:06:59.569 "num_base_bdevs_operational": 2, 00:06:59.570 "base_bdevs_list": [ 00:06:59.570 { 00:06:59.570 "name": "pt1", 00:06:59.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:59.570 "is_configured": true, 00:06:59.570 "data_offset": 2048, 00:06:59.570 "data_size": 63488 00:06:59.570 }, 00:06:59.570 { 00:06:59.570 "name": "pt2", 00:06:59.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.570 "is_configured": true, 00:06:59.570 "data_offset": 2048, 00:06:59.570 "data_size": 63488 00:06:59.570 } 00:06:59.570 ] 00:06:59.570 }' 00:06:59.570 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.570 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.139 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:00.139 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:00.139 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.139 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.139 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.140 
11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.140 [2024-12-06 11:49:57.598714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.140 "name": "raid_bdev1", 00:07:00.140 "aliases": [ 00:07:00.140 "9061a402-8b78-4458-b1ae-d508bc10fa37" 00:07:00.140 ], 00:07:00.140 "product_name": "Raid Volume", 00:07:00.140 "block_size": 512, 00:07:00.140 "num_blocks": 126976, 00:07:00.140 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:00.140 "assigned_rate_limits": { 00:07:00.140 "rw_ios_per_sec": 0, 00:07:00.140 "rw_mbytes_per_sec": 0, 00:07:00.140 "r_mbytes_per_sec": 0, 00:07:00.140 "w_mbytes_per_sec": 0 00:07:00.140 }, 00:07:00.140 "claimed": false, 00:07:00.140 "zoned": false, 00:07:00.140 "supported_io_types": { 00:07:00.140 "read": true, 00:07:00.140 "write": true, 00:07:00.140 "unmap": true, 00:07:00.140 "flush": true, 00:07:00.140 "reset": true, 00:07:00.140 "nvme_admin": false, 00:07:00.140 "nvme_io": false, 00:07:00.140 "nvme_io_md": false, 00:07:00.140 "write_zeroes": true, 00:07:00.140 "zcopy": false, 00:07:00.140 "get_zone_info": false, 00:07:00.140 "zone_management": false, 00:07:00.140 "zone_append": false, 00:07:00.140 "compare": false, 00:07:00.140 "compare_and_write": false, 00:07:00.140 "abort": false, 00:07:00.140 "seek_hole": false, 00:07:00.140 
"seek_data": false, 00:07:00.140 "copy": false, 00:07:00.140 "nvme_iov_md": false 00:07:00.140 }, 00:07:00.140 "memory_domains": [ 00:07:00.140 { 00:07:00.140 "dma_device_id": "system", 00:07:00.140 "dma_device_type": 1 00:07:00.140 }, 00:07:00.140 { 00:07:00.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.140 "dma_device_type": 2 00:07:00.140 }, 00:07:00.140 { 00:07:00.140 "dma_device_id": "system", 00:07:00.140 "dma_device_type": 1 00:07:00.140 }, 00:07:00.140 { 00:07:00.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.140 "dma_device_type": 2 00:07:00.140 } 00:07:00.140 ], 00:07:00.140 "driver_specific": { 00:07:00.140 "raid": { 00:07:00.140 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:00.140 "strip_size_kb": 64, 00:07:00.140 "state": "online", 00:07:00.140 "raid_level": "concat", 00:07:00.140 "superblock": true, 00:07:00.140 "num_base_bdevs": 2, 00:07:00.140 "num_base_bdevs_discovered": 2, 00:07:00.140 "num_base_bdevs_operational": 2, 00:07:00.140 "base_bdevs_list": [ 00:07:00.140 { 00:07:00.140 "name": "pt1", 00:07:00.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.140 "is_configured": true, 00:07:00.140 "data_offset": 2048, 00:07:00.140 "data_size": 63488 00:07:00.140 }, 00:07:00.140 { 00:07:00.140 "name": "pt2", 00:07:00.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.140 "is_configured": true, 00:07:00.140 "data_offset": 2048, 00:07:00.140 "data_size": 63488 00:07:00.140 } 00:07:00.140 ] 00:07:00.140 } 00:07:00.140 } 00:07:00.140 }' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:00.140 pt2' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.140 11:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:00.140 [2024-12-06 11:49:57.842215] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9061a402-8b78-4458-b1ae-d508bc10fa37 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9061a402-8b78-4458-b1ae-d508bc10fa37 ']' 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.140 [2024-12-06 11:49:57.889907] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:00.140 [2024-12-06 11:49:57.889934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.140 [2024-12-06 11:49:57.890002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.140 [2024-12-06 11:49:57.890051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.140 [2024-12-06 11:49:57.890063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:00.140 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:00.401 11:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 [2024-12-06 11:49:58.029686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:00.401 [2024-12-06 11:49:58.031552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:00.401 [2024-12-06 11:49:58.031612] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:00.401 [2024-12-06 11:49:58.031662] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:00.401 [2024-12-06 11:49:58.031681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:00.401 [2024-12-06 11:49:58.031689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:00.401 request: 00:07:00.401 { 00:07:00.401 "name": "raid_bdev1", 00:07:00.401 "raid_level": "concat", 00:07:00.401 "base_bdevs": [ 00:07:00.401 "malloc1", 00:07:00.401 "malloc2" 00:07:00.401 ], 00:07:00.401 "strip_size_kb": 64, 00:07:00.401 "superblock": false, 00:07:00.401 "method": "bdev_raid_create", 00:07:00.401 "req_id": 1 00:07:00.401 } 00:07:00.401 Got JSON-RPC error response 00:07:00.401 response: 00:07:00.401 { 00:07:00.401 "code": -17, 00:07:00.401 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:00.401 } 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 
11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 [2024-12-06 11:49:58.093531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:00.401 [2024-12-06 11:49:58.093582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.401 [2024-12-06 11:49:58.093603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:00.401 [2024-12-06 11:49:58.093611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.401 [2024-12-06 11:49:58.095667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.401 [2024-12-06 11:49:58.095702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:00.401 [2024-12-06 11:49:58.095763] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:00.401 [2024-12-06 11:49:58.095791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:00.401 pt1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.401 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.401 "name": "raid_bdev1", 00:07:00.401 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:00.401 "strip_size_kb": 64, 00:07:00.401 "state": "configuring", 00:07:00.401 "raid_level": "concat", 00:07:00.401 "superblock": true, 00:07:00.402 "num_base_bdevs": 2, 00:07:00.402 "num_base_bdevs_discovered": 1, 00:07:00.402 "num_base_bdevs_operational": 2, 00:07:00.402 "base_bdevs_list": [ 00:07:00.402 { 00:07:00.402 "name": "pt1", 00:07:00.402 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:00.402 "is_configured": true, 00:07:00.402 "data_offset": 2048, 00:07:00.402 "data_size": 63488 00:07:00.402 }, 00:07:00.402 { 00:07:00.402 "name": null, 00:07:00.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.402 "is_configured": false, 00:07:00.402 "data_offset": 2048, 00:07:00.402 "data_size": 63488 00:07:00.402 } 00:07:00.402 ] 00:07:00.402 }' 00:07:00.402 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.402 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.971 [2024-12-06 11:49:58.516844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:00.971 [2024-12-06 11:49:58.516937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.971 [2024-12-06 11:49:58.516976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:00.971 [2024-12-06 11:49:58.517003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.971 [2024-12-06 11:49:58.517383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.971 [2024-12-06 11:49:58.517443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:00.971 [2024-12-06 11:49:58.517538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:00.971 [2024-12-06 11:49:58.517585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:00.971 [2024-12-06 11:49:58.517687] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:00.971 [2024-12-06 11:49:58.517721] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.971 [2024-12-06 11:49:58.517970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:00.971 [2024-12-06 11:49:58.518099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:00.971 [2024-12-06 11:49:58.518141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:00.971 [2024-12-06 11:49:58.518278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.971 pt2 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.971 "name": "raid_bdev1", 00:07:00.971 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:00.971 "strip_size_kb": 64, 00:07:00.971 "state": "online", 00:07:00.971 "raid_level": "concat", 00:07:00.971 "superblock": true, 00:07:00.971 "num_base_bdevs": 2, 00:07:00.971 "num_base_bdevs_discovered": 2, 00:07:00.971 "num_base_bdevs_operational": 2, 00:07:00.971 "base_bdevs_list": [ 00:07:00.971 { 00:07:00.971 "name": "pt1", 00:07:00.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.971 "is_configured": true, 00:07:00.971 "data_offset": 2048, 00:07:00.971 "data_size": 63488 00:07:00.971 }, 00:07:00.971 { 00:07:00.971 "name": "pt2", 00:07:00.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.971 "is_configured": true, 00:07:00.971 "data_offset": 2048, 00:07:00.971 "data_size": 63488 00:07:00.971 } 00:07:00.971 ] 00:07:00.971 }' 00:07:00.971 11:49:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.971 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.232 [2024-12-06 11:49:58.940376] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.232 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.232 "name": "raid_bdev1", 00:07:01.232 "aliases": [ 00:07:01.232 "9061a402-8b78-4458-b1ae-d508bc10fa37" 00:07:01.232 ], 00:07:01.232 "product_name": "Raid Volume", 00:07:01.232 "block_size": 512, 00:07:01.232 "num_blocks": 126976, 00:07:01.232 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:01.232 "assigned_rate_limits": { 00:07:01.232 "rw_ios_per_sec": 0, 00:07:01.232 "rw_mbytes_per_sec": 0, 00:07:01.232 
"r_mbytes_per_sec": 0, 00:07:01.232 "w_mbytes_per_sec": 0 00:07:01.232 }, 00:07:01.232 "claimed": false, 00:07:01.232 "zoned": false, 00:07:01.232 "supported_io_types": { 00:07:01.232 "read": true, 00:07:01.232 "write": true, 00:07:01.232 "unmap": true, 00:07:01.232 "flush": true, 00:07:01.232 "reset": true, 00:07:01.232 "nvme_admin": false, 00:07:01.232 "nvme_io": false, 00:07:01.233 "nvme_io_md": false, 00:07:01.233 "write_zeroes": true, 00:07:01.233 "zcopy": false, 00:07:01.233 "get_zone_info": false, 00:07:01.233 "zone_management": false, 00:07:01.233 "zone_append": false, 00:07:01.233 "compare": false, 00:07:01.233 "compare_and_write": false, 00:07:01.233 "abort": false, 00:07:01.233 "seek_hole": false, 00:07:01.233 "seek_data": false, 00:07:01.233 "copy": false, 00:07:01.233 "nvme_iov_md": false 00:07:01.233 }, 00:07:01.233 "memory_domains": [ 00:07:01.233 { 00:07:01.233 "dma_device_id": "system", 00:07:01.233 "dma_device_type": 1 00:07:01.233 }, 00:07:01.233 { 00:07:01.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.233 "dma_device_type": 2 00:07:01.233 }, 00:07:01.233 { 00:07:01.233 "dma_device_id": "system", 00:07:01.233 "dma_device_type": 1 00:07:01.233 }, 00:07:01.233 { 00:07:01.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.233 "dma_device_type": 2 00:07:01.233 } 00:07:01.233 ], 00:07:01.233 "driver_specific": { 00:07:01.233 "raid": { 00:07:01.233 "uuid": "9061a402-8b78-4458-b1ae-d508bc10fa37", 00:07:01.233 "strip_size_kb": 64, 00:07:01.233 "state": "online", 00:07:01.233 "raid_level": "concat", 00:07:01.233 "superblock": true, 00:07:01.233 "num_base_bdevs": 2, 00:07:01.233 "num_base_bdevs_discovered": 2, 00:07:01.233 "num_base_bdevs_operational": 2, 00:07:01.233 "base_bdevs_list": [ 00:07:01.233 { 00:07:01.233 "name": "pt1", 00:07:01.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.233 "is_configured": true, 00:07:01.233 "data_offset": 2048, 00:07:01.233 "data_size": 63488 00:07:01.233 }, 00:07:01.233 { 00:07:01.233 "name": 
"pt2", 00:07:01.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.233 "is_configured": true, 00:07:01.233 "data_offset": 2048, 00:07:01.233 "data_size": 63488 00:07:01.233 } 00:07:01.233 ] 00:07:01.233 } 00:07:01.233 } 00:07:01.233 }' 00:07:01.233 11:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:01.493 pt2' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.493 11:49:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.493 [2024-12-06 11:49:59.171949] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9061a402-8b78-4458-b1ae-d508bc10fa37 '!=' 9061a402-8b78-4458-b1ae-d508bc10fa37 ']' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73221 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73221 ']' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73221 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.493 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73221 00:07:01.754 killing process with pid 73221 00:07:01.754 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.754 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.754 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73221' 00:07:01.754 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73221 00:07:01.754 [2024-12-06 11:49:59.262535] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.754 [2024-12-06 11:49:59.262621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.754 [2024-12-06 11:49:59.262669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.754 [2024-12-06 11:49:59.262678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:01.754 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73221 00:07:01.754 [2024-12-06 11:49:59.285604] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.014 11:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:02.014 00:07:02.014 real 0m3.342s 00:07:02.014 user 0m5.152s 00:07:02.014 sys 0m0.666s 00:07:02.014 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.014 11:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:02.014 ************************************ 00:07:02.014 END TEST raid_superblock_test 00:07:02.014 ************************************ 00:07:02.014 11:49:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:02.014 11:49:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:02.014 11:49:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.014 11:49:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.014 ************************************ 00:07:02.014 START TEST raid_read_error_test 00:07:02.014 ************************************ 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zNqnxBIgLy 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73416 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73416 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73416 ']' 00:07:02.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.014 11:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.014 [2024-12-06 11:49:59.697649] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:02.014 [2024-12-06 11:49:59.697850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73416 ] 00:07:02.273 [2024-12-06 11:49:59.836633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.273 [2024-12-06 11:49:59.882170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.273 [2024-12-06 11:49:59.925438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.273 [2024-12-06 11:49:59.925542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:02.845 11:50:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 BaseBdev1_malloc 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 true 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 [2024-12-06 11:50:00.535281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:02.845 [2024-12-06 11:50:00.535391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.845 [2024-12-06 11:50:00.535417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:02.845 [2024-12-06 11:50:00.535426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.845 [2024-12-06 11:50:00.537526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.845 [2024-12-06 11:50:00.537571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:02.845 BaseBdev1 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.845 
11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 BaseBdev2_malloc 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 true 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.845 [2024-12-06 11:50:00.593060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:02.845 [2024-12-06 11:50:00.593144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.845 [2024-12-06 11:50:00.593178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:02.845 [2024-12-06 11:50:00.593192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.845 [2024-12-06 11:50:00.596638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:02.845 [2024-12-06 11:50:00.596782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:02.845 BaseBdev2 00:07:02.845 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.131 [2024-12-06 11:50:00.605101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.131 [2024-12-06 11:50:00.607325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.131 [2024-12-06 11:50:00.607587] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:03.131 [2024-12-06 11:50:00.607643] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.131 [2024-12-06 11:50:00.607993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:03.131 [2024-12-06 11:50:00.608181] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:03.131 [2024-12-06 11:50:00.608263] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:03.131 [2024-12-06 11:50:00.608461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.131 "name": "raid_bdev1", 00:07:03.131 "uuid": "556cd57c-ff1d-4eb4-95aa-a47b2f19f776", 00:07:03.131 "strip_size_kb": 64, 00:07:03.131 "state": "online", 00:07:03.131 "raid_level": "concat", 00:07:03.131 "superblock": true, 00:07:03.131 "num_base_bdevs": 2, 00:07:03.131 "num_base_bdevs_discovered": 2, 00:07:03.131 "num_base_bdevs_operational": 2, 00:07:03.131 "base_bdevs_list": [ 00:07:03.131 { 00:07:03.131 "name": "BaseBdev1", 00:07:03.131 "uuid": 
"672ae492-4d95-5c77-84ee-ed01b3d31d83", 00:07:03.131 "is_configured": true, 00:07:03.131 "data_offset": 2048, 00:07:03.131 "data_size": 63488 00:07:03.131 }, 00:07:03.131 { 00:07:03.131 "name": "BaseBdev2", 00:07:03.131 "uuid": "10fd9cc1-52c9-5066-8757-2836fe3ab96a", 00:07:03.131 "is_configured": true, 00:07:03.131 "data_offset": 2048, 00:07:03.131 "data_size": 63488 00:07:03.131 } 00:07:03.131 ] 00:07:03.131 }' 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.131 11:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.414 11:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:03.414 11:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:03.414 [2024-12-06 11:50:01.148522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.611 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.611 "name": "raid_bdev1", 00:07:04.611 "uuid": "556cd57c-ff1d-4eb4-95aa-a47b2f19f776", 00:07:04.611 "strip_size_kb": 64, 00:07:04.611 "state": "online", 00:07:04.611 "raid_level": "concat", 00:07:04.611 "superblock": true, 00:07:04.611 "num_base_bdevs": 2, 00:07:04.611 "num_base_bdevs_discovered": 2, 00:07:04.611 "num_base_bdevs_operational": 2, 00:07:04.611 "base_bdevs_list": [ 00:07:04.611 { 00:07:04.611 "name": "BaseBdev1", 00:07:04.611 "uuid": 
"672ae492-4d95-5c77-84ee-ed01b3d31d83", 00:07:04.611 "is_configured": true, 00:07:04.611 "data_offset": 2048, 00:07:04.611 "data_size": 63488 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "name": "BaseBdev2", 00:07:04.611 "uuid": "10fd9cc1-52c9-5066-8757-2836fe3ab96a", 00:07:04.611 "is_configured": true, 00:07:04.611 "data_offset": 2048, 00:07:04.611 "data_size": 63488 00:07:04.611 } 00:07:04.611 ] 00:07:04.611 }' 00:07:04.611 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.611 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.869 [2024-12-06 11:50:02.514623] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.869 [2024-12-06 11:50:02.514709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.869 [2024-12-06 11:50:02.517166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.869 [2024-12-06 11:50:02.517271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.869 [2024-12-06 11:50:02.517329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.869 [2024-12-06 11:50:02.517369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:04.869 { 00:07:04.869 "results": [ 00:07:04.869 { 00:07:04.869 "job": "raid_bdev1", 00:07:04.869 "core_mask": "0x1", 00:07:04.869 "workload": "randrw", 00:07:04.869 "percentage": 50, 00:07:04.869 "status": "finished", 00:07:04.869 "queue_depth": 1, 00:07:04.869 "io_size": 
131072, 00:07:04.869 "runtime": 1.367071, 00:07:04.869 "iops": 18162.919116856403, 00:07:04.869 "mibps": 2270.3648896070504, 00:07:04.869 "io_failed": 1, 00:07:04.869 "io_timeout": 0, 00:07:04.869 "avg_latency_us": 76.07994894394403, 00:07:04.869 "min_latency_us": 24.370305676855896, 00:07:04.869 "max_latency_us": 1395.1441048034935 00:07:04.869 } 00:07:04.869 ], 00:07:04.869 "core_count": 1 00:07:04.869 } 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73416 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73416 ']' 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73416 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.869 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73416 00:07:04.869 killing process with pid 73416 00:07:04.870 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.870 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.870 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73416' 00:07:04.870 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73416 00:07:04.870 [2024-12-06 11:50:02.551327] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.870 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73416 00:07:04.870 [2024-12-06 11:50:02.566999] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.130 11:50:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zNqnxBIgLy 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:05.130 ************************************ 00:07:05.130 END TEST raid_read_error_test 00:07:05.130 ************************************ 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:05.130 00:07:05.130 real 0m3.205s 00:07:05.130 user 0m4.058s 00:07:05.130 sys 0m0.497s 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.130 11:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.130 11:50:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:05.130 11:50:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:05.130 11:50:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.130 11:50:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.130 ************************************ 00:07:05.130 START TEST raid_write_error_test 00:07:05.130 ************************************ 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:05.130 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:05.390 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:05.391 
11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.35wXnK9oco 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73545 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73545 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73545 ']' 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.391 11:50:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.391 [2024-12-06 11:50:02.976720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:05.391 [2024-12-06 11:50:02.976924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:07:05.391 [2024-12-06 11:50:03.102889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.391 [2024-12-06 11:50:03.145777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.651 [2024-12-06 11:50:03.187539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.651 [2024-12-06 11:50:03.187660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 BaseBdev1_malloc 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 true 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 [2024-12-06 11:50:03.841208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:06.222 [2024-12-06 11:50:03.841333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.222 [2024-12-06 11:50:03.841364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:06.222 [2024-12-06 11:50:03.841373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.222 [2024-12-06 11:50:03.843495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.222 [2024-12-06 11:50:03.843533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:06.222 BaseBdev1 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 BaseBdev2_malloc 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:06.222 11:50:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 true 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 [2024-12-06 11:50:03.902874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:06.222 [2024-12-06 11:50:03.902960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.222 [2024-12-06 11:50:03.902997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:06.222 [2024-12-06 11:50:03.903012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.222 [2024-12-06 11:50:03.905736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.222 [2024-12-06 11:50:03.905779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:06.222 BaseBdev2 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 [2024-12-06 11:50:03.914875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:06.222 [2024-12-06 11:50:03.916826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.222 [2024-12-06 11:50:03.916996] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:06.222 [2024-12-06 11:50:03.917010] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.222 [2024-12-06 11:50:03.917261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:06.222 [2024-12-06 11:50:03.917391] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:06.222 [2024-12-06 11:50:03.917404] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:06.222 [2024-12-06 11:50:03.917528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.222 11:50:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.482 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.482 "name": "raid_bdev1", 00:07:06.482 "uuid": "c5559178-6989-43c3-872d-ddad5acad3d7", 00:07:06.482 "strip_size_kb": 64, 00:07:06.482 "state": "online", 00:07:06.482 "raid_level": "concat", 00:07:06.482 "superblock": true, 00:07:06.482 "num_base_bdevs": 2, 00:07:06.482 "num_base_bdevs_discovered": 2, 00:07:06.482 "num_base_bdevs_operational": 2, 00:07:06.482 "base_bdevs_list": [ 00:07:06.482 { 00:07:06.482 "name": "BaseBdev1", 00:07:06.482 "uuid": "a3a7b437-11f3-5d5e-a15b-6a028962c72d", 00:07:06.482 "is_configured": true, 00:07:06.482 "data_offset": 2048, 00:07:06.482 "data_size": 63488 00:07:06.482 }, 00:07:06.482 { 00:07:06.482 "name": "BaseBdev2", 00:07:06.482 "uuid": "ea11dd75-f752-55f4-b3a3-43f1de33927e", 00:07:06.482 "is_configured": true, 00:07:06.482 "data_offset": 2048, 00:07:06.482 "data_size": 63488 00:07:06.482 } 00:07:06.482 ] 00:07:06.482 }' 00:07:06.482 11:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.482 11:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.742 11:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:06.742 11:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:06.742 [2024-12-06 11:50:04.418389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.682 "name": "raid_bdev1", 00:07:07.682 "uuid": "c5559178-6989-43c3-872d-ddad5acad3d7", 00:07:07.682 "strip_size_kb": 64, 00:07:07.682 "state": "online", 00:07:07.682 "raid_level": "concat", 00:07:07.682 "superblock": true, 00:07:07.682 "num_base_bdevs": 2, 00:07:07.682 "num_base_bdevs_discovered": 2, 00:07:07.682 "num_base_bdevs_operational": 2, 00:07:07.682 "base_bdevs_list": [ 00:07:07.682 { 00:07:07.682 "name": "BaseBdev1", 00:07:07.682 "uuid": "a3a7b437-11f3-5d5e-a15b-6a028962c72d", 00:07:07.682 "is_configured": true, 00:07:07.682 "data_offset": 2048, 00:07:07.682 "data_size": 63488 00:07:07.682 }, 00:07:07.682 { 00:07:07.682 "name": "BaseBdev2", 00:07:07.682 "uuid": "ea11dd75-f752-55f4-b3a3-43f1de33927e", 00:07:07.682 "is_configured": true, 00:07:07.682 "data_offset": 2048, 00:07:07.682 "data_size": 63488 00:07:07.682 } 00:07:07.682 ] 00:07:07.682 }' 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.682 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.252 [2024-12-06 11:50:05.806274] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.252 [2024-12-06 11:50:05.806309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.252 [2024-12-06 11:50:05.808783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.252 [2024-12-06 11:50:05.808854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.252 [2024-12-06 11:50:05.808888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.252 [2024-12-06 11:50:05.808897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:08.252 { 00:07:08.252 "results": [ 00:07:08.252 { 00:07:08.252 "job": "raid_bdev1", 00:07:08.252 "core_mask": "0x1", 00:07:08.252 "workload": "randrw", 00:07:08.252 "percentage": 50, 00:07:08.252 "status": "finished", 00:07:08.252 "queue_depth": 1, 00:07:08.252 "io_size": 131072, 00:07:08.252 "runtime": 1.388793, 00:07:08.252 "iops": 18241.739409688846, 00:07:08.252 "mibps": 2280.2174262111057, 00:07:08.252 "io_failed": 1, 00:07:08.252 "io_timeout": 0, 00:07:08.252 "avg_latency_us": 75.82063496741911, 00:07:08.252 "min_latency_us": 24.370305676855896, 00:07:08.252 "max_latency_us": 1395.1441048034935 00:07:08.252 } 00:07:08.252 ], 00:07:08.252 "core_count": 1 00:07:08.252 } 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73545 00:07:08.252 11:50:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73545 ']' 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73545 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73545 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.252 killing process with pid 73545 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73545' 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73545 00:07:08.252 [2024-12-06 11:50:05.851891] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.252 11:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73545 00:07:08.252 [2024-12-06 11:50:05.867382] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.35wXnK9oco 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.512 11:50:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:08.512 00:07:08.512 real 0m3.226s 00:07:08.512 user 0m4.100s 00:07:08.512 sys 0m0.460s 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.512 11:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.512 ************************************ 00:07:08.512 END TEST raid_write_error_test 00:07:08.512 ************************************ 00:07:08.512 11:50:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:08.512 11:50:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:08.512 11:50:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:08.512 11:50:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.512 11:50:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.512 ************************************ 00:07:08.512 START TEST raid_state_function_test 00:07:08.512 ************************************ 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.512 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73678 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73678' 00:07:08.513 Process raid pid: 73678 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73678 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73678 ']' 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.513 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.773 [2024-12-06 11:50:06.278357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:08.773 [2024-12-06 11:50:06.278486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.773 [2024-12-06 11:50:06.425704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.773 [2024-12-06 11:50:06.470207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.773 [2024-12-06 11:50:06.512550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.773 [2024-12-06 11:50:06.512599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.343 [2024-12-06 11:50:07.081776] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.343 [2024-12-06 11:50:07.081826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.343 [2024-12-06 11:50:07.081838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.343 [2024-12-06 11:50:07.081848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.343 11:50:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.343 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.603 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.603 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.603 "name": "Existed_Raid", 00:07:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.603 "strip_size_kb": 0, 00:07:09.603 "state": "configuring", 00:07:09.603 
"raid_level": "raid1", 00:07:09.603 "superblock": false, 00:07:09.603 "num_base_bdevs": 2, 00:07:09.603 "num_base_bdevs_discovered": 0, 00:07:09.603 "num_base_bdevs_operational": 2, 00:07:09.603 "base_bdevs_list": [ 00:07:09.603 { 00:07:09.603 "name": "BaseBdev1", 00:07:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.603 "is_configured": false, 00:07:09.603 "data_offset": 0, 00:07:09.603 "data_size": 0 00:07:09.603 }, 00:07:09.603 { 00:07:09.603 "name": "BaseBdev2", 00:07:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.603 "is_configured": false, 00:07:09.603 "data_offset": 0, 00:07:09.603 "data_size": 0 00:07:09.603 } 00:07:09.603 ] 00:07:09.603 }' 00:07:09.603 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.603 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.864 [2024-12-06 11:50:07.521044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.864 [2024-12-06 11:50:07.521088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:09.864 [2024-12-06 11:50:07.533023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.864 [2024-12-06 11:50:07.533065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.864 [2024-12-06 11:50:07.533082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.864 [2024-12-06 11:50:07.533092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.864 [2024-12-06 11:50:07.554006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.864 BaseBdev1 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.864 [ 00:07:09.864 { 00:07:09.864 "name": "BaseBdev1", 00:07:09.864 "aliases": [ 00:07:09.864 "d38b17f4-d5fd-45ad-8e37-51d7a411b641" 00:07:09.864 ], 00:07:09.864 "product_name": "Malloc disk", 00:07:09.864 "block_size": 512, 00:07:09.864 "num_blocks": 65536, 00:07:09.864 "uuid": "d38b17f4-d5fd-45ad-8e37-51d7a411b641", 00:07:09.864 "assigned_rate_limits": { 00:07:09.864 "rw_ios_per_sec": 0, 00:07:09.864 "rw_mbytes_per_sec": 0, 00:07:09.864 "r_mbytes_per_sec": 0, 00:07:09.864 "w_mbytes_per_sec": 0 00:07:09.864 }, 00:07:09.864 "claimed": true, 00:07:09.864 "claim_type": "exclusive_write", 00:07:09.864 "zoned": false, 00:07:09.864 "supported_io_types": { 00:07:09.864 "read": true, 00:07:09.864 "write": true, 00:07:09.864 "unmap": true, 00:07:09.864 "flush": true, 00:07:09.864 "reset": true, 00:07:09.864 "nvme_admin": false, 00:07:09.864 "nvme_io": false, 00:07:09.864 "nvme_io_md": false, 00:07:09.864 "write_zeroes": true, 00:07:09.864 "zcopy": true, 00:07:09.864 "get_zone_info": false, 00:07:09.864 "zone_management": false, 00:07:09.864 "zone_append": false, 00:07:09.864 "compare": false, 00:07:09.864 "compare_and_write": false, 00:07:09.864 "abort": true, 00:07:09.864 "seek_hole": false, 00:07:09.864 "seek_data": false, 00:07:09.864 "copy": true, 00:07:09.864 "nvme_iov_md": 
false 00:07:09.864 }, 00:07:09.864 "memory_domains": [ 00:07:09.864 { 00:07:09.864 "dma_device_id": "system", 00:07:09.864 "dma_device_type": 1 00:07:09.864 }, 00:07:09.864 { 00:07:09.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.864 "dma_device_type": 2 00:07:09.864 } 00:07:09.864 ], 00:07:09.864 "driver_specific": {} 00:07:09.864 } 00:07:09.864 ] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.864 
11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.864 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.865 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.125 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.125 "name": "Existed_Raid", 00:07:10.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.125 "strip_size_kb": 0, 00:07:10.125 "state": "configuring", 00:07:10.125 "raid_level": "raid1", 00:07:10.125 "superblock": false, 00:07:10.125 "num_base_bdevs": 2, 00:07:10.125 "num_base_bdevs_discovered": 1, 00:07:10.125 "num_base_bdevs_operational": 2, 00:07:10.125 "base_bdevs_list": [ 00:07:10.125 { 00:07:10.125 "name": "BaseBdev1", 00:07:10.125 "uuid": "d38b17f4-d5fd-45ad-8e37-51d7a411b641", 00:07:10.125 "is_configured": true, 00:07:10.125 "data_offset": 0, 00:07:10.125 "data_size": 65536 00:07:10.125 }, 00:07:10.125 { 00:07:10.125 "name": "BaseBdev2", 00:07:10.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.125 "is_configured": false, 00:07:10.125 "data_offset": 0, 00:07:10.125 "data_size": 0 00:07:10.125 } 00:07:10.125 ] 00:07:10.125 }' 00:07:10.125 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.125 11:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 [2024-12-06 11:50:08.013274] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.384 [2024-12-06 11:50:08.013329] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.384 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 [2024-12-06 11:50:08.021294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.385 [2024-12-06 11:50:08.023082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.385 [2024-12-06 11:50:08.023123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.385 "name": "Existed_Raid", 00:07:10.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.385 "strip_size_kb": 0, 00:07:10.385 "state": "configuring", 00:07:10.385 "raid_level": "raid1", 00:07:10.385 "superblock": false, 00:07:10.385 "num_base_bdevs": 2, 00:07:10.385 "num_base_bdevs_discovered": 1, 00:07:10.385 "num_base_bdevs_operational": 2, 00:07:10.385 "base_bdevs_list": [ 00:07:10.385 { 00:07:10.385 "name": "BaseBdev1", 00:07:10.385 "uuid": "d38b17f4-d5fd-45ad-8e37-51d7a411b641", 00:07:10.385 "is_configured": true, 00:07:10.385 "data_offset": 0, 00:07:10.385 "data_size": 65536 00:07:10.385 }, 00:07:10.385 { 00:07:10.385 "name": "BaseBdev2", 00:07:10.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.385 "is_configured": false, 00:07:10.385 "data_offset": 0, 00:07:10.385 "data_size": 0 00:07:10.385 } 00:07:10.385 ] 
00:07:10.385 }' 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.385 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.954 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:10.954 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.954 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.954 [2024-12-06 11:50:08.448698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.954 [2024-12-06 11:50:08.448862] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:10.954 [2024-12-06 11:50:08.448901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:10.954 [2024-12-06 11:50:08.449925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:10.954 [2024-12-06 11:50:08.450437] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:10.955 [2024-12-06 11:50:08.450513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:10.955 [2024-12-06 11:50:08.451084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.955 BaseBdev2 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.955 [ 00:07:10.955 { 00:07:10.955 "name": "BaseBdev2", 00:07:10.955 "aliases": [ 00:07:10.955 "71f930ec-629a-4f1c-8232-6a7e04ea1a27" 00:07:10.955 ], 00:07:10.955 "product_name": "Malloc disk", 00:07:10.955 "block_size": 512, 00:07:10.955 "num_blocks": 65536, 00:07:10.955 "uuid": "71f930ec-629a-4f1c-8232-6a7e04ea1a27", 00:07:10.955 "assigned_rate_limits": { 00:07:10.955 "rw_ios_per_sec": 0, 00:07:10.955 "rw_mbytes_per_sec": 0, 00:07:10.955 "r_mbytes_per_sec": 0, 00:07:10.955 "w_mbytes_per_sec": 0 00:07:10.955 }, 00:07:10.955 "claimed": true, 00:07:10.955 "claim_type": "exclusive_write", 00:07:10.955 "zoned": false, 00:07:10.955 "supported_io_types": { 00:07:10.955 "read": true, 00:07:10.955 "write": true, 00:07:10.955 "unmap": true, 00:07:10.955 "flush": true, 00:07:10.955 "reset": true, 00:07:10.955 "nvme_admin": false, 00:07:10.955 "nvme_io": false, 00:07:10.955 "nvme_io_md": false, 00:07:10.955 "write_zeroes": 
true, 00:07:10.955 "zcopy": true, 00:07:10.955 "get_zone_info": false, 00:07:10.955 "zone_management": false, 00:07:10.955 "zone_append": false, 00:07:10.955 "compare": false, 00:07:10.955 "compare_and_write": false, 00:07:10.955 "abort": true, 00:07:10.955 "seek_hole": false, 00:07:10.955 "seek_data": false, 00:07:10.955 "copy": true, 00:07:10.955 "nvme_iov_md": false 00:07:10.955 }, 00:07:10.955 "memory_domains": [ 00:07:10.955 { 00:07:10.955 "dma_device_id": "system", 00:07:10.955 "dma_device_type": 1 00:07:10.955 }, 00:07:10.955 { 00:07:10.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.955 "dma_device_type": 2 00:07:10.955 } 00:07:10.955 ], 00:07:10.955 "driver_specific": {} 00:07:10.955 } 00:07:10.955 ] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.955 11:50:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.955 "name": "Existed_Raid", 00:07:10.955 "uuid": "60a69567-4bcc-4a26-a036-6a0667f9c7c3", 00:07:10.955 "strip_size_kb": 0, 00:07:10.955 "state": "online", 00:07:10.955 "raid_level": "raid1", 00:07:10.955 "superblock": false, 00:07:10.955 "num_base_bdevs": 2, 00:07:10.955 "num_base_bdevs_discovered": 2, 00:07:10.955 "num_base_bdevs_operational": 2, 00:07:10.955 "base_bdevs_list": [ 00:07:10.955 { 00:07:10.955 "name": "BaseBdev1", 00:07:10.955 "uuid": "d38b17f4-d5fd-45ad-8e37-51d7a411b641", 00:07:10.955 "is_configured": true, 00:07:10.955 "data_offset": 0, 00:07:10.955 "data_size": 65536 00:07:10.955 }, 00:07:10.955 { 00:07:10.955 "name": "BaseBdev2", 00:07:10.955 "uuid": "71f930ec-629a-4f1c-8232-6a7e04ea1a27", 00:07:10.955 "is_configured": true, 00:07:10.955 "data_offset": 0, 00:07:10.955 "data_size": 65536 00:07:10.955 } 00:07:10.955 ] 00:07:10.955 }' 00:07:10.955 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.955 11:50:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.215 [2024-12-06 11:50:08.856224] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.215 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.215 "name": "Existed_Raid", 00:07:11.215 "aliases": [ 00:07:11.215 "60a69567-4bcc-4a26-a036-6a0667f9c7c3" 00:07:11.215 ], 00:07:11.216 "product_name": "Raid Volume", 00:07:11.216 "block_size": 512, 00:07:11.216 "num_blocks": 65536, 00:07:11.216 "uuid": "60a69567-4bcc-4a26-a036-6a0667f9c7c3", 00:07:11.216 "assigned_rate_limits": { 00:07:11.216 "rw_ios_per_sec": 0, 00:07:11.216 "rw_mbytes_per_sec": 0, 00:07:11.216 "r_mbytes_per_sec": 0, 00:07:11.216 
"w_mbytes_per_sec": 0 00:07:11.216 }, 00:07:11.216 "claimed": false, 00:07:11.216 "zoned": false, 00:07:11.216 "supported_io_types": { 00:07:11.216 "read": true, 00:07:11.216 "write": true, 00:07:11.216 "unmap": false, 00:07:11.216 "flush": false, 00:07:11.216 "reset": true, 00:07:11.216 "nvme_admin": false, 00:07:11.216 "nvme_io": false, 00:07:11.216 "nvme_io_md": false, 00:07:11.216 "write_zeroes": true, 00:07:11.216 "zcopy": false, 00:07:11.216 "get_zone_info": false, 00:07:11.216 "zone_management": false, 00:07:11.216 "zone_append": false, 00:07:11.216 "compare": false, 00:07:11.216 "compare_and_write": false, 00:07:11.216 "abort": false, 00:07:11.216 "seek_hole": false, 00:07:11.216 "seek_data": false, 00:07:11.216 "copy": false, 00:07:11.216 "nvme_iov_md": false 00:07:11.216 }, 00:07:11.216 "memory_domains": [ 00:07:11.216 { 00:07:11.216 "dma_device_id": "system", 00:07:11.216 "dma_device_type": 1 00:07:11.216 }, 00:07:11.216 { 00:07:11.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.216 "dma_device_type": 2 00:07:11.216 }, 00:07:11.216 { 00:07:11.216 "dma_device_id": "system", 00:07:11.216 "dma_device_type": 1 00:07:11.216 }, 00:07:11.216 { 00:07:11.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.216 "dma_device_type": 2 00:07:11.216 } 00:07:11.216 ], 00:07:11.216 "driver_specific": { 00:07:11.216 "raid": { 00:07:11.216 "uuid": "60a69567-4bcc-4a26-a036-6a0667f9c7c3", 00:07:11.216 "strip_size_kb": 0, 00:07:11.216 "state": "online", 00:07:11.216 "raid_level": "raid1", 00:07:11.216 "superblock": false, 00:07:11.216 "num_base_bdevs": 2, 00:07:11.216 "num_base_bdevs_discovered": 2, 00:07:11.216 "num_base_bdevs_operational": 2, 00:07:11.216 "base_bdevs_list": [ 00:07:11.216 { 00:07:11.216 "name": "BaseBdev1", 00:07:11.216 "uuid": "d38b17f4-d5fd-45ad-8e37-51d7a411b641", 00:07:11.216 "is_configured": true, 00:07:11.216 "data_offset": 0, 00:07:11.216 "data_size": 65536 00:07:11.216 }, 00:07:11.216 { 00:07:11.216 "name": "BaseBdev2", 00:07:11.216 "uuid": 
"71f930ec-629a-4f1c-8232-6a7e04ea1a27", 00:07:11.216 "is_configured": true, 00:07:11.216 "data_offset": 0, 00:07:11.216 "data_size": 65536 00:07:11.216 } 00:07:11.216 ] 00:07:11.216 } 00:07:11.216 } 00:07:11.216 }' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:11.216 BaseBdev2' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.216 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:11.476 11:50:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.476 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.476 [2024-12-06 11:50:09.051634] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:11.476 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.477 "name": "Existed_Raid", 00:07:11.477 "uuid": "60a69567-4bcc-4a26-a036-6a0667f9c7c3", 00:07:11.477 "strip_size_kb": 0, 00:07:11.477 "state": "online", 00:07:11.477 "raid_level": "raid1", 00:07:11.477 "superblock": false, 00:07:11.477 "num_base_bdevs": 2, 00:07:11.477 "num_base_bdevs_discovered": 1, 00:07:11.477 "num_base_bdevs_operational": 1, 00:07:11.477 "base_bdevs_list": [ 00:07:11.477 { 
00:07:11.477 "name": null, 00:07:11.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.477 "is_configured": false, 00:07:11.477 "data_offset": 0, 00:07:11.477 "data_size": 65536 00:07:11.477 }, 00:07:11.477 { 00:07:11.477 "name": "BaseBdev2", 00:07:11.477 "uuid": "71f930ec-629a-4f1c-8232-6a7e04ea1a27", 00:07:11.477 "is_configured": true, 00:07:11.477 "data_offset": 0, 00:07:11.477 "data_size": 65536 00:07:11.477 } 00:07:11.477 ] 00:07:11.477 }' 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.477 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:12.047 [2024-12-06 11:50:09.570019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:12.047 [2024-12-06 11:50:09.570114] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.047 [2024-12-06 11:50:09.581691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.047 [2024-12-06 11:50:09.581736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.047 [2024-12-06 11:50:09.581749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73678 00:07:12.047 11:50:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73678 ']' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73678 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73678 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.047 killing process with pid 73678 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73678' 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73678 00:07:12.047 [2024-12-06 11:50:09.679615] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.047 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73678 00:07:12.047 [2024-12-06 11:50:09.680576] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:12.309 00:07:12.309 real 0m3.743s 00:07:12.309 user 0m5.868s 00:07:12.309 sys 0m0.733s 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.309 ************************************ 00:07:12.309 END TEST raid_state_function_test 00:07:12.309 ************************************ 00:07:12.309 11:50:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:12.309 11:50:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:12.309 11:50:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.309 11:50:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.309 ************************************ 00:07:12.309 START TEST raid_state_function_test_sb 00:07:12.309 ************************************ 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.309 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73914
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:12.309 Process raid pid: 73914
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73914'
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73914
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73914 ']'
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:12.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:12.309 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:12.569 [2024-12-06 11:50:10.085379] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:07:12.569 [2024-12-06 11:50:10.085510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:12.569 [2024-12-06 11:50:10.231258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.569 [2024-12-06 11:50:10.276122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.569 [2024-12-06 11:50:10.318617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:12.569 [2024-12-06 11:50:10.318658] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.510 [2024-12-06 11:50:10.908478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:13.510 [2024-12-06 11:50:10.908526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:13.510 [2024-12-06 11:50:10.908538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:13.510 [2024-12-06 11:50:10.908547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.510 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.511 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.511 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:13.511 "name": "Existed_Raid",
00:07:13.511 "uuid": "9cc031b3-824c-45ca-849b-d522479b96a0",
00:07:13.511 "strip_size_kb": 0,
00:07:13.511 "state": "configuring",
00:07:13.511 "raid_level": "raid1",
00:07:13.511 "superblock": true,
00:07:13.511 "num_base_bdevs": 2,
00:07:13.511 "num_base_bdevs_discovered": 0,
00:07:13.511 "num_base_bdevs_operational": 2,
00:07:13.511 "base_bdevs_list": [
00:07:13.511 {
00:07:13.511 "name": "BaseBdev1",
00:07:13.511 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:13.511 "is_configured": false,
00:07:13.511 "data_offset": 0,
00:07:13.511 "data_size": 0
00:07:13.511 },
00:07:13.511 {
00:07:13.511 "name": "BaseBdev2",
00:07:13.511 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:13.511 "is_configured": false,
00:07:13.511 "data_offset": 0,
00:07:13.511 "data_size": 0
00:07:13.511 }
00:07:13.511 ]
00:07:13.511 }'
00:07:13.511 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:13.511 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 [2024-12-06 11:50:11.355607] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:13.772 [2024-12-06 11:50:11.355701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 [2024-12-06 11:50:11.367588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:13.772 [2024-12-06 11:50:11.367668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:13.772 [2024-12-06 11:50:11.367705] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:13.772 [2024-12-06 11:50:11.367729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 [2024-12-06 11:50:11.388472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:13.772 BaseBdev1
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.772 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.772 [
00:07:13.772 {
00:07:13.772 "name": "BaseBdev1",
00:07:13.772 "aliases": [
00:07:13.772 "db431008-de82-4d39-8771-6ad33b05a6f9"
00:07:13.772 ],
00:07:13.772 "product_name": "Malloc disk",
00:07:13.772 "block_size": 512,
00:07:13.772 "num_blocks": 65536,
00:07:13.772 "uuid": "db431008-de82-4d39-8771-6ad33b05a6f9",
00:07:13.772 "assigned_rate_limits": {
00:07:13.772 "rw_ios_per_sec": 0,
00:07:13.772 "rw_mbytes_per_sec": 0,
00:07:13.772 "r_mbytes_per_sec": 0,
00:07:13.772 "w_mbytes_per_sec": 0
00:07:13.772 },
00:07:13.772 "claimed": true,
00:07:13.772 "claim_type": "exclusive_write",
00:07:13.772 "zoned": false,
00:07:13.772 "supported_io_types": {
00:07:13.772 "read": true,
00:07:13.772 "write": true,
00:07:13.772 "unmap": true,
00:07:13.772 "flush": true,
00:07:13.772 "reset": true,
00:07:13.773 "nvme_admin": false,
00:07:13.773 "nvme_io": false,
00:07:13.773 "nvme_io_md": false,
00:07:13.773 "write_zeroes": true,
00:07:13.773 "zcopy": true,
00:07:13.773 "get_zone_info": false,
00:07:13.773 "zone_management": false,
00:07:13.773 "zone_append": false,
00:07:13.773 "compare": false,
00:07:13.773 "compare_and_write": false,
00:07:13.773 "abort": true,
00:07:13.773 "seek_hole": false,
00:07:13.773 "seek_data": false,
00:07:13.773 "copy": true,
00:07:13.773 "nvme_iov_md": false
00:07:13.773 },
00:07:13.773 "memory_domains": [
00:07:13.773 {
00:07:13.773 "dma_device_id": "system",
00:07:13.773 "dma_device_type": 1
00:07:13.773 },
00:07:13.773 {
00:07:13.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:13.773 "dma_device_type": 2
00:07:13.773 }
00:07:13.773 ],
00:07:13.773 "driver_specific": {}
00:07:13.773 }
00:07:13.773 ]
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:13.773 "name": "Existed_Raid",
00:07:13.773 "uuid": "c4dde73a-7c38-474c-a5d6-64f75cdf7fa3",
00:07:13.773 "strip_size_kb": 0,
00:07:13.773 "state": "configuring",
00:07:13.773 "raid_level": "raid1",
00:07:13.773 "superblock": true,
00:07:13.773 "num_base_bdevs": 2,
00:07:13.773 "num_base_bdevs_discovered": 1,
00:07:13.773 "num_base_bdevs_operational": 2,
00:07:13.773 "base_bdevs_list": [
00:07:13.773 {
00:07:13.773 "name": "BaseBdev1",
00:07:13.773 "uuid": "db431008-de82-4d39-8771-6ad33b05a6f9",
00:07:13.773 "is_configured": true,
00:07:13.773 "data_offset": 2048,
00:07:13.773 "data_size": 63488
00:07:13.773 },
00:07:13.773 {
00:07:13.773 "name": "BaseBdev2",
00:07:13.773 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:13.773 "is_configured": false,
00:07:13.773 "data_offset": 0,
00:07:13.773 "data_size": 0
00:07:13.773 }
00:07:13.773 ]
00:07:13.773 }'
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:13.773 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.346 [2024-12-06 11:50:11.871722] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:14.346 [2024-12-06 11:50:11.871814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.346 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.347 [2024-12-06 11:50:11.883761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:14.347 [2024-12-06 11:50:11.885692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:14.347 [2024-12-06 11:50:11.885787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:14.347 "name": "Existed_Raid",
00:07:14.347 "uuid": "cfbf8379-de4b-4bc2-a36b-f652cdfe0516",
00:07:14.347 "strip_size_kb": 0,
00:07:14.347 "state": "configuring",
00:07:14.347 "raid_level": "raid1",
00:07:14.347 "superblock": true,
00:07:14.347 "num_base_bdevs": 2,
00:07:14.347 "num_base_bdevs_discovered": 1,
00:07:14.347 "num_base_bdevs_operational": 2,
00:07:14.347 "base_bdevs_list": [
00:07:14.347 {
00:07:14.347 "name": "BaseBdev1",
00:07:14.347 "uuid": "db431008-de82-4d39-8771-6ad33b05a6f9",
00:07:14.347 "is_configured": true,
00:07:14.347 "data_offset": 2048,
00:07:14.347 "data_size": 63488
00:07:14.347 },
00:07:14.347 {
00:07:14.347 "name": "BaseBdev2",
00:07:14.347 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.347 "is_configured": false,
00:07:14.347 "data_offset": 0,
00:07:14.347 "data_size": 0
00:07:14.347 }
00:07:14.347 ]
00:07:14.347 }'
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:14.347 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.608 [2024-12-06 11:50:12.337970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:14.608 [2024-12-06 11:50:12.338203] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:14.608 [2024-12-06 11:50:12.338222] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:14.608 [2024-12-06 11:50:12.338592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:07:14.608 BaseBdev2
00:07:14.608 [2024-12-06 11:50:12.338778] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:14.608 [2024-12-06 11:50:12.338810] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:07:14.608 [2024-12-06 11:50:12.338955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.608 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.608 [
00:07:14.608 {
00:07:14.608 "name": "BaseBdev2",
00:07:14.608 "aliases": [
00:07:14.608 "e5874bf2-d9f3-4149-8ff6-de70b05e0721"
00:07:14.608 ],
00:07:14.868 "product_name": "Malloc disk",
00:07:14.868 "block_size": 512,
00:07:14.868 "num_blocks": 65536,
00:07:14.868 "uuid": "e5874bf2-d9f3-4149-8ff6-de70b05e0721",
00:07:14.868 "assigned_rate_limits": {
00:07:14.868 "rw_ios_per_sec": 0,
00:07:14.868 "rw_mbytes_per_sec": 0,
00:07:14.868 "r_mbytes_per_sec": 0,
00:07:14.868 "w_mbytes_per_sec": 0
00:07:14.868 },
00:07:14.868 "claimed": true,
00:07:14.868 "claim_type": "exclusive_write",
00:07:14.868 "zoned": false,
00:07:14.868 "supported_io_types": {
00:07:14.868 "read": true,
00:07:14.868 "write": true,
00:07:14.868 "unmap": true,
00:07:14.868 "flush": true,
00:07:14.868 "reset": true,
00:07:14.868 "nvme_admin": false,
00:07:14.868 "nvme_io": false,
00:07:14.868 "nvme_io_md": false,
00:07:14.868 "write_zeroes": true,
00:07:14.868 "zcopy": true,
00:07:14.868 "get_zone_info": false,
00:07:14.868 "zone_management": false,
00:07:14.868 "zone_append": false,
00:07:14.868 "compare": false,
00:07:14.868 "compare_and_write": false,
00:07:14.868 "abort": true,
00:07:14.868 "seek_hole": false,
00:07:14.868 "seek_data": false,
00:07:14.868 "copy": true,
00:07:14.868 "nvme_iov_md": false
00:07:14.868 },
00:07:14.868 "memory_domains": [
00:07:14.868 {
00:07:14.868 "dma_device_id": "system",
00:07:14.868 "dma_device_type": 1
00:07:14.868 },
00:07:14.868 {
00:07:14.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:14.869 "dma_device_type": 2
00:07:14.869 }
00:07:14.869 ],
00:07:14.869 "driver_specific": {}
00:07:14.869 }
00:07:14.869 ]
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:14.869 "name": "Existed_Raid",
00:07:14.869 "uuid": "cfbf8379-de4b-4bc2-a36b-f652cdfe0516",
00:07:14.869 "strip_size_kb": 0,
00:07:14.869 "state": "online",
00:07:14.869 "raid_level": "raid1",
00:07:14.869 "superblock": true,
00:07:14.869 "num_base_bdevs": 2,
00:07:14.869 "num_base_bdevs_discovered": 2,
00:07:14.869 "num_base_bdevs_operational": 2,
00:07:14.869 "base_bdevs_list": [
00:07:14.869 {
00:07:14.869 "name": "BaseBdev1",
00:07:14.869 "uuid": "db431008-de82-4d39-8771-6ad33b05a6f9",
00:07:14.869 "is_configured": true,
00:07:14.869 "data_offset": 2048,
00:07:14.869 "data_size": 63488
00:07:14.869 },
00:07:14.869 {
00:07:14.869 "name": "BaseBdev2",
00:07:14.869 "uuid": "e5874bf2-d9f3-4149-8ff6-de70b05e0721",
00:07:14.869 "is_configured": true,
00:07:14.869 "data_offset": 2048,
00:07:14.869 "data_size": 63488
00:07:14.869 }
00:07:14.869 ]
00:07:14.869 }'
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:14.869 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.128 [2024-12-06 11:50:12.797494] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.128 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:15.128 "name": "Existed_Raid",
00:07:15.128 "aliases": [
00:07:15.128 "cfbf8379-de4b-4bc2-a36b-f652cdfe0516"
00:07:15.128 ],
00:07:15.128 "product_name": "Raid Volume",
00:07:15.128 "block_size": 512,
00:07:15.128 "num_blocks": 63488,
00:07:15.128 "uuid": "cfbf8379-de4b-4bc2-a36b-f652cdfe0516",
00:07:15.128 "assigned_rate_limits": {
00:07:15.128 "rw_ios_per_sec": 0,
00:07:15.128 "rw_mbytes_per_sec": 0,
00:07:15.128 "r_mbytes_per_sec": 0,
00:07:15.128 "w_mbytes_per_sec": 0
00:07:15.128 },
00:07:15.128 "claimed": false,
00:07:15.128 "zoned": false,
00:07:15.128 "supported_io_types": {
00:07:15.128 "read": true,
00:07:15.128 "write": true,
00:07:15.128 "unmap": false,
00:07:15.128 "flush": false,
00:07:15.128 "reset": true,
00:07:15.128 "nvme_admin": false,
00:07:15.128 "nvme_io": false,
00:07:15.128 "nvme_io_md": false,
00:07:15.128 "write_zeroes": true,
00:07:15.128 "zcopy": false,
00:07:15.128 "get_zone_info": false,
00:07:15.128 "zone_management": false,
00:07:15.128 "zone_append": false,
00:07:15.128 "compare": false,
00:07:15.128 "compare_and_write": false,
00:07:15.128 "abort": false,
00:07:15.128 "seek_hole": false,
00:07:15.128 "seek_data": false,
00:07:15.128 "copy": false,
00:07:15.128 "nvme_iov_md": false
00:07:15.128 },
00:07:15.128 "memory_domains": [
00:07:15.128 {
00:07:15.128 "dma_device_id": "system",
00:07:15.128 "dma_device_type": 1
00:07:15.128 },
00:07:15.129 {
00:07:15.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.129 "dma_device_type": 2
00:07:15.129 },
00:07:15.129 {
00:07:15.129 "dma_device_id": "system",
00:07:15.129 "dma_device_type": 1
00:07:15.129 },
00:07:15.129 {
00:07:15.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.129 "dma_device_type": 2
00:07:15.129 }
00:07:15.129 ],
00:07:15.129 "driver_specific": {
00:07:15.129 "raid": {
00:07:15.129 "uuid": "cfbf8379-de4b-4bc2-a36b-f652cdfe0516",
00:07:15.129 "strip_size_kb": 0,
00:07:15.129 "state": "online",
00:07:15.129 "raid_level": "raid1",
00:07:15.129 "superblock": true,
00:07:15.129 "num_base_bdevs": 2,
00:07:15.129 "num_base_bdevs_discovered": 2,
00:07:15.129 "num_base_bdevs_operational": 2,
00:07:15.129 "base_bdevs_list": [
00:07:15.129 {
00:07:15.129 "name": "BaseBdev1",
00:07:15.129 "uuid": "db431008-de82-4d39-8771-6ad33b05a6f9",
00:07:15.129 "is_configured": true,
00:07:15.129 "data_offset": 2048,
00:07:15.129 "data_size": 63488
00:07:15.129 },
00:07:15.129 {
00:07:15.129 "name": "BaseBdev2",
00:07:15.129 "uuid": "e5874bf2-d9f3-4149-8ff6-de70b05e0721",
00:07:15.129 "is_configured": true,
00:07:15.129 "data_offset": 2048,
00:07:15.129 "data_size": 63488
00:07:15.129 }
00:07:15.129 ]
00:07:15.129 }
00:07:15.129 }
00:07:15.129 }'
00:07:15.129 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:15.129 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:15.129 BaseBdev2'
00:07:15.129 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.389 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.389 [2024-12-06 11:50:12.996940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:15.389 "name": "Existed_Raid",
00:07:15.389 "uuid": "cfbf8379-de4b-4bc2-a36b-f652cdfe0516",
00:07:15.389 "strip_size_kb": 0,
00:07:15.389 "state": "online",
00:07:15.389 "raid_level": "raid1",
00:07:15.389 "superblock": true,
00:07:15.389 "num_base_bdevs": 2,
00:07:15.389 "num_base_bdevs_discovered": 1,
00:07:15.389 "num_base_bdevs_operational": 1,
00:07:15.389 "base_bdevs_list": [
00:07:15.389 {
00:07:15.389 "name": null,
00:07:15.389 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:15.389 "is_configured": false,
00:07:15.389 "data_offset": 0,
00:07:15.389 "data_size": 63488
00:07:15.389 },
00:07:15.389 {
00:07:15.389 "name": "BaseBdev2",
00:07:15.389 "uuid": "e5874bf2-d9f3-4149-8ff6-de70b05e0721",
00:07:15.389 "is_configured": true,
00:07:15.389 "data_offset": 2048,
00:07:15.389 "data_size": 63488
00:07:15.389 }
00:07:15.389 ]
00:07:15.389 }'
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:15.389 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:15.973 [2024-12-06 11:50:13.495473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:15.973 [2024-12-06 11:50:13.495579] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:15.973 [2024-12-06 11:50:13.507220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:15.973 [2024-12-06 11:50:13.507286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:15.973 [2024-12-06 11:50:13.507299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73914 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73914 ']' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73914 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73914 00:07:15.973 killing process with pid 73914 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73914' 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73914 00:07:15.973 [2024-12-06 11:50:13.605159] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.973 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73914 00:07:15.973 [2024-12-06 11:50:13.606158] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.233 11:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:16.233 00:07:16.233 real 0m3.838s 00:07:16.233 user 0m6.075s 00:07:16.233 sys 0m0.739s 00:07:16.233 ************************************ 00:07:16.233 END TEST raid_state_function_test_sb 00:07:16.233 ************************************ 00:07:16.233 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.233 11:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.233 11:50:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:16.233 11:50:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:16.233 11:50:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.233 11:50:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.233 ************************************ 00:07:16.233 START TEST raid_superblock_test 00:07:16.233 ************************************ 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:16.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74150 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74150 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74150 ']' 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.233 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.233 [2024-12-06 11:50:13.986327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:16.233 [2024-12-06 11:50:13.986542] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:07:16.493 [2024-12-06 11:50:14.128657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.493 [2024-12-06 11:50:14.174092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.493 [2024-12-06 11:50:14.217491] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.493 [2024-12-06 11:50:14.217616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:17.431 
11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.431 malloc1 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.431 [2024-12-06 11:50:14.875882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:17.431 [2024-12-06 11:50:14.875993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.431 [2024-12-06 11:50:14.876016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:17.431 [2024-12-06 11:50:14.876030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.431 [2024-12-06 11:50:14.878088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.431 [2024-12-06 11:50:14.878168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:17.431 pt1 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:17.431 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.432 malloc2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.432 [2024-12-06 11:50:14.913190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:17.432 [2024-12-06 11:50:14.913300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.432 [2024-12-06 11:50:14.913337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:17.432 [2024-12-06 11:50:14.913369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.432 [2024-12-06 11:50:14.915497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.432 [2024-12-06 11:50:14.915575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:17.432 
pt2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.432 [2024-12-06 11:50:14.925203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:17.432 [2024-12-06 11:50:14.927048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:17.432 [2024-12-06 11:50:14.927254] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:17.432 [2024-12-06 11:50:14.927311] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:17.432 [2024-12-06 11:50:14.927584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:17.432 [2024-12-06 11:50:14.927756] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:17.432 [2024-12-06 11:50:14.927802] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:17.432 [2024-12-06 11:50:14.927970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.432 "name": "raid_bdev1", 00:07:17.432 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:17.432 "strip_size_kb": 0, 00:07:17.432 "state": "online", 00:07:17.432 "raid_level": "raid1", 00:07:17.432 "superblock": true, 00:07:17.432 "num_base_bdevs": 2, 00:07:17.432 "num_base_bdevs_discovered": 2, 00:07:17.432 "num_base_bdevs_operational": 2, 00:07:17.432 "base_bdevs_list": [ 00:07:17.432 { 00:07:17.432 "name": "pt1", 00:07:17.432 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:17.432 "is_configured": true, 00:07:17.432 "data_offset": 2048, 00:07:17.432 "data_size": 63488 00:07:17.432 }, 00:07:17.432 { 00:07:17.432 "name": "pt2", 00:07:17.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.432 "is_configured": true, 00:07:17.432 "data_offset": 2048, 00:07:17.432 "data_size": 63488 00:07:17.432 } 00:07:17.432 ] 00:07:17.432 }' 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.432 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.691 [2024-12-06 11:50:15.376745] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.691 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:17.691 "name": "raid_bdev1", 00:07:17.691 "aliases": [ 00:07:17.691 "d8e69722-656d-4c52-8c0b-313c58097502" 00:07:17.691 ], 00:07:17.691 "product_name": "Raid Volume", 00:07:17.691 "block_size": 512, 00:07:17.691 "num_blocks": 63488, 00:07:17.691 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:17.691 "assigned_rate_limits": { 00:07:17.691 "rw_ios_per_sec": 0, 00:07:17.691 "rw_mbytes_per_sec": 0, 00:07:17.691 "r_mbytes_per_sec": 0, 00:07:17.691 "w_mbytes_per_sec": 0 00:07:17.691 }, 00:07:17.691 "claimed": false, 00:07:17.691 "zoned": false, 00:07:17.691 "supported_io_types": { 00:07:17.691 "read": true, 00:07:17.691 "write": true, 00:07:17.691 "unmap": false, 00:07:17.691 "flush": false, 00:07:17.691 "reset": true, 00:07:17.691 "nvme_admin": false, 00:07:17.691 "nvme_io": false, 00:07:17.691 "nvme_io_md": false, 00:07:17.691 "write_zeroes": true, 00:07:17.691 "zcopy": false, 00:07:17.691 "get_zone_info": false, 00:07:17.691 "zone_management": false, 00:07:17.691 "zone_append": false, 00:07:17.691 "compare": false, 00:07:17.691 "compare_and_write": false, 00:07:17.691 "abort": false, 00:07:17.691 "seek_hole": false, 00:07:17.691 "seek_data": false, 00:07:17.691 "copy": false, 00:07:17.691 "nvme_iov_md": false 00:07:17.691 }, 00:07:17.691 "memory_domains": [ 00:07:17.691 { 00:07:17.691 "dma_device_id": "system", 00:07:17.691 "dma_device_type": 1 00:07:17.691 }, 00:07:17.691 { 00:07:17.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.691 "dma_device_type": 2 00:07:17.691 }, 00:07:17.691 { 00:07:17.692 "dma_device_id": "system", 00:07:17.692 "dma_device_type": 1 00:07:17.692 }, 00:07:17.692 { 00:07:17.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.692 "dma_device_type": 2 00:07:17.692 } 00:07:17.692 ], 00:07:17.692 "driver_specific": { 00:07:17.692 "raid": { 00:07:17.692 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:17.692 "strip_size_kb": 0, 00:07:17.692 "state": "online", 00:07:17.692 "raid_level": "raid1", 
00:07:17.692 "superblock": true, 00:07:17.692 "num_base_bdevs": 2, 00:07:17.692 "num_base_bdevs_discovered": 2, 00:07:17.692 "num_base_bdevs_operational": 2, 00:07:17.692 "base_bdevs_list": [ 00:07:17.692 { 00:07:17.692 "name": "pt1", 00:07:17.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.692 "is_configured": true, 00:07:17.692 "data_offset": 2048, 00:07:17.692 "data_size": 63488 00:07:17.692 }, 00:07:17.692 { 00:07:17.692 "name": "pt2", 00:07:17.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.692 "is_configured": true, 00:07:17.692 "data_offset": 2048, 00:07:17.692 "data_size": 63488 00:07:17.692 } 00:07:17.692 ] 00:07:17.692 } 00:07:17.692 } 00:07:17.692 }' 00:07:17.692 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:17.951 pt2' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 [2024-12-06 11:50:15.592291] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d8e69722-656d-4c52-8c0b-313c58097502 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d8e69722-656d-4c52-8c0b-313c58097502 ']' 00:07:17.951 11:50:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 [2024-12-06 11:50:15.635963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.951 [2024-12-06 11:50:15.635995] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.951 [2024-12-06 11:50:15.636102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.951 [2024-12-06 11:50:15.636171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.951 [2024-12-06 11:50:15.636182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:17.951 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.952 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:18.212 11:50:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 [2024-12-06 11:50:15.763724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:18.212 [2024-12-06 11:50:15.765582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:18.212 [2024-12-06 11:50:15.765648] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:18.212 [2024-12-06 11:50:15.765693] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:18.212 [2024-12-06 11:50:15.765709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.212 [2024-12-06 11:50:15.765718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:18.212 request: 00:07:18.212 { 00:07:18.212 "name": "raid_bdev1", 00:07:18.212 "raid_level": "raid1", 00:07:18.212 "base_bdevs": [ 00:07:18.212 "malloc1", 00:07:18.212 "malloc2" 00:07:18.212 ], 00:07:18.212 "superblock": false, 00:07:18.212 "method": "bdev_raid_create", 00:07:18.212 "req_id": 1 00:07:18.212 } 00:07:18.212 Got 
JSON-RPC error response 00:07:18.212 response: 00:07:18.212 { 00:07:18.212 "code": -17, 00:07:18.212 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:18.212 } 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 [2024-12-06 11:50:15.827568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.212 [2024-12-06 11:50:15.827674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:18.212 [2024-12-06 11:50:15.827716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:18.212 [2024-12-06 11:50:15.827747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.212 [2024-12-06 11:50:15.829895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.212 [2024-12-06 11:50:15.829965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.212 [2024-12-06 11:50:15.830063] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:18.212 [2024-12-06 11:50:15.830121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:18.212 pt1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.212 
11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.212 "name": "raid_bdev1", 00:07:18.212 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:18.212 "strip_size_kb": 0, 00:07:18.212 "state": "configuring", 00:07:18.212 "raid_level": "raid1", 00:07:18.212 "superblock": true, 00:07:18.212 "num_base_bdevs": 2, 00:07:18.212 "num_base_bdevs_discovered": 1, 00:07:18.212 "num_base_bdevs_operational": 2, 00:07:18.212 "base_bdevs_list": [ 00:07:18.212 { 00:07:18.212 "name": "pt1", 00:07:18.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.212 "is_configured": true, 00:07:18.212 "data_offset": 2048, 00:07:18.212 "data_size": 63488 00:07:18.212 }, 00:07:18.212 { 00:07:18.212 "name": null, 00:07:18.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.212 "is_configured": false, 00:07:18.212 "data_offset": 2048, 00:07:18.212 "data_size": 63488 00:07:18.212 } 00:07:18.212 ] 00:07:18.212 }' 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.212 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.904 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.905 [2024-12-06 11:50:16.271143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:18.905 [2024-12-06 11:50:16.271208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.905 [2024-12-06 11:50:16.271230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.905 [2024-12-06 11:50:16.271247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.905 [2024-12-06 11:50:16.271625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.905 [2024-12-06 11:50:16.271642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:18.905 [2024-12-06 11:50:16.271713] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:18.905 [2024-12-06 11:50:16.271733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:18.905 [2024-12-06 11:50:16.271833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:18.905 [2024-12-06 11:50:16.271843] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:18.905 [2024-12-06 11:50:16.272073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:18.905 [2024-12-06 11:50:16.272181] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:18.905 [2024-12-06 11:50:16.272194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001900 00:07:18.905 [2024-12-06 11:50:16.272313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.905 pt2 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.905 "name": "raid_bdev1", 00:07:18.905 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:18.905 "strip_size_kb": 0, 00:07:18.905 "state": "online", 00:07:18.905 "raid_level": "raid1", 00:07:18.905 "superblock": true, 00:07:18.905 "num_base_bdevs": 2, 00:07:18.905 "num_base_bdevs_discovered": 2, 00:07:18.905 "num_base_bdevs_operational": 2, 00:07:18.905 "base_bdevs_list": [ 00:07:18.905 { 00:07:18.905 "name": "pt1", 00:07:18.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.905 "is_configured": true, 00:07:18.905 "data_offset": 2048, 00:07:18.905 "data_size": 63488 00:07:18.905 }, 00:07:18.905 { 00:07:18.905 "name": "pt2", 00:07:18.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.905 "is_configured": true, 00:07:18.905 "data_offset": 2048, 00:07:18.905 "data_size": 63488 00:07:18.905 } 00:07:18.905 ] 00:07:18.905 }' 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.905 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.172 [2024-12-06 11:50:16.714593] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.172 "name": "raid_bdev1", 00:07:19.172 "aliases": [ 00:07:19.172 "d8e69722-656d-4c52-8c0b-313c58097502" 00:07:19.172 ], 00:07:19.172 "product_name": "Raid Volume", 00:07:19.172 "block_size": 512, 00:07:19.172 "num_blocks": 63488, 00:07:19.172 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:19.172 "assigned_rate_limits": { 00:07:19.172 "rw_ios_per_sec": 0, 00:07:19.172 "rw_mbytes_per_sec": 0, 00:07:19.172 "r_mbytes_per_sec": 0, 00:07:19.172 "w_mbytes_per_sec": 0 00:07:19.172 }, 00:07:19.172 "claimed": false, 00:07:19.172 "zoned": false, 00:07:19.172 "supported_io_types": { 00:07:19.172 "read": true, 00:07:19.172 "write": true, 00:07:19.172 "unmap": false, 00:07:19.172 "flush": false, 00:07:19.172 "reset": true, 00:07:19.172 "nvme_admin": false, 00:07:19.172 "nvme_io": false, 00:07:19.172 "nvme_io_md": false, 00:07:19.172 "write_zeroes": true, 00:07:19.172 "zcopy": false, 00:07:19.172 "get_zone_info": false, 00:07:19.172 "zone_management": false, 00:07:19.172 "zone_append": false, 00:07:19.172 "compare": false, 00:07:19.172 "compare_and_write": false, 00:07:19.172 "abort": false, 00:07:19.172 "seek_hole": false, 00:07:19.172 "seek_data": false, 00:07:19.172 "copy": false, 00:07:19.172 "nvme_iov_md": false 00:07:19.172 }, 00:07:19.172 "memory_domains": [ 00:07:19.172 { 00:07:19.172 "dma_device_id": 
"system", 00:07:19.172 "dma_device_type": 1 00:07:19.172 }, 00:07:19.172 { 00:07:19.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.172 "dma_device_type": 2 00:07:19.172 }, 00:07:19.172 { 00:07:19.172 "dma_device_id": "system", 00:07:19.172 "dma_device_type": 1 00:07:19.172 }, 00:07:19.172 { 00:07:19.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.172 "dma_device_type": 2 00:07:19.172 } 00:07:19.172 ], 00:07:19.172 "driver_specific": { 00:07:19.172 "raid": { 00:07:19.172 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:19.172 "strip_size_kb": 0, 00:07:19.172 "state": "online", 00:07:19.172 "raid_level": "raid1", 00:07:19.172 "superblock": true, 00:07:19.172 "num_base_bdevs": 2, 00:07:19.172 "num_base_bdevs_discovered": 2, 00:07:19.172 "num_base_bdevs_operational": 2, 00:07:19.172 "base_bdevs_list": [ 00:07:19.172 { 00:07:19.172 "name": "pt1", 00:07:19.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.172 "is_configured": true, 00:07:19.172 "data_offset": 2048, 00:07:19.172 "data_size": 63488 00:07:19.172 }, 00:07:19.172 { 00:07:19.172 "name": "pt2", 00:07:19.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.172 "is_configured": true, 00:07:19.172 "data_offset": 2048, 00:07:19.172 "data_size": 63488 00:07:19.172 } 00:07:19.172 ] 00:07:19.172 } 00:07:19.172 } 00:07:19.172 }' 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.172 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:19.172 pt2' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.173 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:19.432 [2024-12-06 11:50:16.938174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d8e69722-656d-4c52-8c0b-313c58097502 '!=' d8e69722-656d-4c52-8c0b-313c58097502 ']' 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.432 [2024-12-06 11:50:16.977934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.432 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.433 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.433 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.433 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.433 "name": "raid_bdev1", 00:07:19.433 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:19.433 "strip_size_kb": 0, 00:07:19.433 "state": "online", 00:07:19.433 "raid_level": "raid1", 00:07:19.433 "superblock": true, 00:07:19.433 "num_base_bdevs": 2, 00:07:19.433 "num_base_bdevs_discovered": 1, 00:07:19.433 "num_base_bdevs_operational": 1, 00:07:19.433 "base_bdevs_list": [ 00:07:19.433 { 00:07:19.433 "name": null, 00:07:19.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.433 "is_configured": false, 00:07:19.433 "data_offset": 0, 00:07:19.433 "data_size": 63488 00:07:19.433 }, 00:07:19.433 { 00:07:19.433 "name": "pt2", 00:07:19.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.433 "is_configured": true, 00:07:19.433 "data_offset": 2048, 00:07:19.433 "data_size": 63488 00:07:19.433 } 00:07:19.433 ] 00:07:19.433 }' 00:07:19.433 11:50:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.433 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.693 [2024-12-06 11:50:17.401162] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.693 [2024-12-06 11:50:17.401240] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.693 [2024-12-06 11:50:17.401331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.693 [2024-12-06 11:50:17.401413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.693 [2024-12-06 11:50:17.401461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.693 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:19.953 
11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.953 [2024-12-06 11:50:17.473028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.953 [2024-12-06 11:50:17.473084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.953 [2024-12-06 11:50:17.473105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:19.953 [2024-12-06 11:50:17.473114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.953 [2024-12-06 
11:50:17.475208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.953 [2024-12-06 11:50:17.475247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.953 [2024-12-06 11:50:17.475317] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:19.953 [2024-12-06 11:50:17.475349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.953 [2024-12-06 11:50:17.475421] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:19.953 [2024-12-06 11:50:17.475429] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:19.953 [2024-12-06 11:50:17.475662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:19.953 [2024-12-06 11:50:17.475787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:19.953 [2024-12-06 11:50:17.475798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:19.953 [2024-12-06 11:50:17.475899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.953 pt2 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.953 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.953 "name": "raid_bdev1", 00:07:19.953 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:19.953 "strip_size_kb": 0, 00:07:19.953 "state": "online", 00:07:19.953 "raid_level": "raid1", 00:07:19.953 "superblock": true, 00:07:19.953 "num_base_bdevs": 2, 00:07:19.953 "num_base_bdevs_discovered": 1, 00:07:19.953 "num_base_bdevs_operational": 1, 00:07:19.953 "base_bdevs_list": [ 00:07:19.953 { 00:07:19.953 "name": null, 00:07:19.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.954 "is_configured": false, 00:07:19.954 "data_offset": 2048, 00:07:19.954 "data_size": 63488 00:07:19.954 }, 00:07:19.954 { 00:07:19.954 "name": "pt2", 00:07:19.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.954 "is_configured": true, 00:07:19.954 "data_offset": 2048, 00:07:19.954 "data_size": 63488 00:07:19.954 } 00:07:19.954 ] 00:07:19.954 }' 
00:07:19.954 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.954 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 [2024-12-06 11:50:17.864366] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.214 [2024-12-06 11:50:17.864457] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.214 [2024-12-06 11:50:17.864564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.214 [2024-12-06 11:50:17.864624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.214 [2024-12-06 11:50:17.864687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 [2024-12-06 11:50:17.904325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.214 [2024-12-06 11:50:17.904421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.214 [2024-12-06 11:50:17.904452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:20.214 [2024-12-06 11:50:17.904483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.214 [2024-12-06 11:50:17.906554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.214 [2024-12-06 11:50:17.906641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.214 [2024-12-06 11:50:17.906731] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:20.214 [2024-12-06 11:50:17.906803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.214 [2024-12-06 11:50:17.906928] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:20.214 [2024-12-06 11:50:17.906983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.214 [2024-12-06 11:50:17.907026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:07:20.214 [2024-12-06 11:50:17.907098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:20.214 [2024-12-06 11:50:17.907207] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:20.214 [2024-12-06 11:50:17.907267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:20.214 [2024-12-06 11:50:17.907491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:20.214 [2024-12-06 11:50:17.907642] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:20.214 [2024-12-06 11:50:17.907682] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:20.214 [2024-12-06 11:50:17.907823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.214 pt1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.214 "name": "raid_bdev1", 00:07:20.214 "uuid": "d8e69722-656d-4c52-8c0b-313c58097502", 00:07:20.214 "strip_size_kb": 0, 00:07:20.214 "state": "online", 00:07:20.214 "raid_level": "raid1", 00:07:20.214 "superblock": true, 00:07:20.214 "num_base_bdevs": 2, 00:07:20.214 "num_base_bdevs_discovered": 1, 00:07:20.214 "num_base_bdevs_operational": 1, 00:07:20.214 "base_bdevs_list": [ 00:07:20.214 { 00:07:20.214 "name": null, 00:07:20.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.214 "is_configured": false, 00:07:20.214 "data_offset": 2048, 00:07:20.214 "data_size": 63488 00:07:20.214 }, 00:07:20.214 { 00:07:20.214 "name": "pt2", 00:07:20.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.214 "is_configured": true, 00:07:20.214 "data_offset": 2048, 00:07:20.214 "data_size": 63488 00:07:20.214 } 00:07:20.214 ] 00:07:20.214 }' 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.214 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:20.785 11:50:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.785 [2024-12-06 11:50:18.367702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d8e69722-656d-4c52-8c0b-313c58097502 '!=' d8e69722-656d-4c52-8c0b-313c58097502 ']' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74150 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74150 ']' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74150 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74150 00:07:20.785 killing 
process with pid 74150 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74150' 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74150 00:07:20.785 [2024-12-06 11:50:18.450657] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.785 [2024-12-06 11:50:18.450724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.785 [2024-12-06 11:50:18.450769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.785 [2024-12-06 11:50:18.450776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:07:20.785 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74150 00:07:20.785 [2024-12-06 11:50:18.473750] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.045 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:21.045 00:07:21.045 real 0m4.809s 00:07:21.045 user 0m7.872s 00:07:21.045 sys 0m0.965s 00:07:21.045 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.045 ************************************ 00:07:21.045 END TEST raid_superblock_test 00:07:21.045 ************************************ 00:07:21.045 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.045 11:50:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:21.045 11:50:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:21.045 11:50:18 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.045 11:50:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.045 ************************************ 00:07:21.045 START TEST raid_read_error_test 00:07:21.045 ************************************ 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:21.045 11:50:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:21.045 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:21.046 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:21.046 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:21.046 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:21.046 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:21.305 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EayL8WaiM6 00:07:21.305 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74469 00:07:21.305 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.305 11:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74469 00:07:21.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.305 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74469 ']' 00:07:21.306 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.306 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.306 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.306 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.306 11:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 [2024-12-06 11:50:18.894050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:21.306 [2024-12-06 11:50:18.894189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74469 ] 00:07:21.306 [2024-12-06 11:50:19.039612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.565 [2024-12-06 11:50:19.085983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.565 [2024-12-06 11:50:19.129403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.565 [2024-12-06 11:50:19.129436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 BaseBdev1_malloc 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 true 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 [2024-12-06 11:50:19.751206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:22.134 [2024-12-06 11:50:19.751269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.134 [2024-12-06 11:50:19.751294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:22.134 [2024-12-06 11:50:19.751309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.134 [2024-12-06 11:50:19.753364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.134 [2024-12-06 11:50:19.753398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:22.134 BaseBdev1 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:22.134 BaseBdev2_malloc 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 true 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 [2024-12-06 11:50:19.805368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:22.134 [2024-12-06 11:50:19.805447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.134 [2024-12-06 11:50:19.805479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:22.134 [2024-12-06 11:50:19.805493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.134 [2024-12-06 11:50:19.808564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.134 [2024-12-06 11:50:19.808614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:22.134 BaseBdev2 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:22.134 11:50:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 [2024-12-06 11:50:19.817466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.134 [2024-12-06 11:50:19.819414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.134 [2024-12-06 11:50:19.819599] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:22.134 [2024-12-06 11:50:19.819614] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:22.134 [2024-12-06 11:50:19.819880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:22.134 [2024-12-06 11:50:19.820023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:22.134 [2024-12-06 11:50:19.820037] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:22.134 [2024-12-06 11:50:19.820174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.134 "name": "raid_bdev1", 00:07:22.134 "uuid": "fadd1933-3ed4-4a5c-a00a-d1bead9ee170", 00:07:22.134 "strip_size_kb": 0, 00:07:22.134 "state": "online", 00:07:22.134 "raid_level": "raid1", 00:07:22.134 "superblock": true, 00:07:22.134 "num_base_bdevs": 2, 00:07:22.134 "num_base_bdevs_discovered": 2, 00:07:22.134 "num_base_bdevs_operational": 2, 00:07:22.134 "base_bdevs_list": [ 00:07:22.134 { 00:07:22.134 "name": "BaseBdev1", 00:07:22.134 "uuid": "63569366-f0c9-5690-978a-a8fce61d09c1", 00:07:22.134 "is_configured": true, 00:07:22.134 "data_offset": 2048, 00:07:22.134 "data_size": 63488 00:07:22.134 }, 00:07:22.134 { 00:07:22.134 "name": "BaseBdev2", 00:07:22.134 "uuid": "3d324917-08ce-55e9-be16-88fe5f7480b6", 00:07:22.134 "is_configured": true, 00:07:22.134 "data_offset": 2048, 00:07:22.134 "data_size": 63488 00:07:22.134 } 00:07:22.134 ] 00:07:22.134 }' 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.134 11:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.703 11:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:22.703 11:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.703 [2024-12-06 11:50:20.364873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.643 11:50:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.643 "name": "raid_bdev1", 00:07:23.643 "uuid": "fadd1933-3ed4-4a5c-a00a-d1bead9ee170", 00:07:23.643 "strip_size_kb": 0, 00:07:23.643 "state": "online", 00:07:23.643 "raid_level": "raid1", 00:07:23.643 "superblock": true, 00:07:23.643 "num_base_bdevs": 2, 00:07:23.643 "num_base_bdevs_discovered": 2, 00:07:23.643 "num_base_bdevs_operational": 2, 00:07:23.643 "base_bdevs_list": [ 00:07:23.643 { 00:07:23.643 "name": "BaseBdev1", 00:07:23.643 "uuid": "63569366-f0c9-5690-978a-a8fce61d09c1", 00:07:23.643 "is_configured": true, 00:07:23.643 "data_offset": 2048, 00:07:23.643 "data_size": 63488 00:07:23.643 }, 00:07:23.643 { 00:07:23.643 "name": "BaseBdev2", 00:07:23.643 "uuid": "3d324917-08ce-55e9-be16-88fe5f7480b6", 00:07:23.643 "is_configured": true, 00:07:23.643 "data_offset": 2048, 00:07:23.643 "data_size": 63488 
00:07:23.643 } 00:07:23.643 ] 00:07:23.643 }' 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.643 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.214 [2024-12-06 11:50:21.722850] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:24.214 [2024-12-06 11:50:21.722886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.214 [2024-12-06 11:50:21.725307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.214 [2024-12-06 11:50:21.725356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.214 [2024-12-06 11:50:21.725437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.214 [2024-12-06 11:50:21.725446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:24.214 { 00:07:24.214 "results": [ 00:07:24.214 { 00:07:24.214 "job": "raid_bdev1", 00:07:24.214 "core_mask": "0x1", 00:07:24.214 "workload": "randrw", 00:07:24.214 "percentage": 50, 00:07:24.214 "status": "finished", 00:07:24.214 "queue_depth": 1, 00:07:24.214 "io_size": 131072, 00:07:24.214 "runtime": 1.358735, 00:07:24.214 "iops": 20079.7064917, 00:07:24.214 "mibps": 2509.9633114625, 00:07:24.214 "io_failed": 0, 00:07:24.214 "io_timeout": 0, 00:07:24.214 "avg_latency_us": 47.23731882242841, 00:07:24.214 "min_latency_us": 21.575545851528386, 00:07:24.214 "max_latency_us": 1359.3711790393013 00:07:24.214 } 00:07:24.214 ], 00:07:24.214 
"core_count": 1 00:07:24.214 } 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74469 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74469 ']' 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74469 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74469 00:07:24.214 killing process with pid 74469 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74469' 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74469 00:07:24.214 [2024-12-06 11:50:21.757409] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.214 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74469 00:07:24.214 [2024-12-06 11:50:21.773606] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EayL8WaiM6 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:24.474 ************************************ 00:07:24.474 END TEST raid_read_error_test 00:07:24.474 ************************************ 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:24.474 00:07:24.474 real 0m3.232s 00:07:24.474 user 0m4.110s 00:07:24.474 sys 0m0.505s 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.474 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.474 11:50:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:24.474 11:50:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:24.474 11:50:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.474 11:50:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.474 ************************************ 00:07:24.474 START TEST raid_write_error_test 00:07:24.474 ************************************ 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- 
# (( i <= num_base_bdevs )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q6AiD4x1uG 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74598 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74598 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74598 ']' 00:07:24.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.474 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.475 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.475 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.475 11:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.475 [2024-12-06 11:50:22.191753] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:24.475 [2024-12-06 11:50:22.191863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74598 ] 00:07:24.734 [2024-12-06 11:50:22.334923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.734 [2024-12-06 11:50:22.379062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.734 [2024-12-06 11:50:22.421364] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.734 [2024-12-06 11:50:22.421405] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 BaseBdev1_malloc 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 true 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 [2024-12-06 11:50:23.051300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:25.303 [2024-12-06 11:50:23.051367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.303 [2024-12-06 11:50:23.051388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:25.303 [2024-12-06 11:50:23.051396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.303 [2024-12-06 11:50:23.053424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.303 [2024-12-06 11:50:23.053464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:25.303 BaseBdev1 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.303 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 BaseBdev2_malloc 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:25.563 11:50:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 true 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 [2024-12-06 11:50:23.106450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:25.563 [2024-12-06 11:50:23.106524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.563 [2024-12-06 11:50:23.106554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:25.563 [2024-12-06 11:50:23.106566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.563 [2024-12-06 11:50:23.109436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.563 [2024-12-06 11:50:23.109484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:25.563 BaseBdev2 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 [2024-12-06 11:50:23.118465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:25.563 [2024-12-06 11:50:23.120369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.563 [2024-12-06 11:50:23.120543] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:25.563 [2024-12-06 11:50:23.120556] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:25.563 [2024-12-06 11:50:23.120804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:25.563 [2024-12-06 11:50:23.120924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:25.563 [2024-12-06 11:50:23.120942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:25.563 [2024-12-06 11:50:23.121064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.563 "name": "raid_bdev1", 00:07:25.563 "uuid": "a4443303-5f67-49c6-b6e8-8fd0b4e95d89", 00:07:25.563 "strip_size_kb": 0, 00:07:25.563 "state": "online", 00:07:25.563 "raid_level": "raid1", 00:07:25.563 "superblock": true, 00:07:25.563 "num_base_bdevs": 2, 00:07:25.563 "num_base_bdevs_discovered": 2, 00:07:25.563 "num_base_bdevs_operational": 2, 00:07:25.563 "base_bdevs_list": [ 00:07:25.563 { 00:07:25.563 "name": "BaseBdev1", 00:07:25.563 "uuid": "153c7eef-ce13-5fb4-a05a-2069948ca4ef", 00:07:25.563 "is_configured": true, 00:07:25.563 "data_offset": 2048, 00:07:25.563 "data_size": 63488 00:07:25.563 }, 00:07:25.563 { 00:07:25.563 "name": "BaseBdev2", 00:07:25.563 "uuid": "b0b77b26-ad06-5b33-9b1e-be1e16164a18", 00:07:25.563 "is_configured": true, 00:07:25.563 "data_offset": 2048, 00:07:25.563 "data_size": 63488 00:07:25.563 } 00:07:25.563 ] 00:07:25.563 }' 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.563 11:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.823 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:25.823 11:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:26.084 [2024-12-06 11:50:23.653916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.023 [2024-12-06 11:50:24.581603] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:27.023 [2024-12-06 11:50:24.581738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:27.023 [2024-12-06 11:50:24.581963] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:27.023 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.024 11:50:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.024 "name": "raid_bdev1", 00:07:27.024 "uuid": "a4443303-5f67-49c6-b6e8-8fd0b4e95d89", 00:07:27.024 "strip_size_kb": 0, 00:07:27.024 "state": "online", 00:07:27.024 "raid_level": "raid1", 00:07:27.024 "superblock": true, 00:07:27.024 "num_base_bdevs": 2, 00:07:27.024 "num_base_bdevs_discovered": 1, 00:07:27.024 "num_base_bdevs_operational": 1, 00:07:27.024 "base_bdevs_list": [ 00:07:27.024 { 00:07:27.024 "name": null, 00:07:27.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.024 "is_configured": false, 00:07:27.024 "data_offset": 0, 00:07:27.024 "data_size": 63488 00:07:27.024 }, 
00:07:27.024 { 00:07:27.024 "name": "BaseBdev2", 00:07:27.024 "uuid": "b0b77b26-ad06-5b33-9b1e-be1e16164a18", 00:07:27.024 "is_configured": true, 00:07:27.024 "data_offset": 2048, 00:07:27.024 "data_size": 63488 00:07:27.024 } 00:07:27.024 ] 00:07:27.024 }' 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.024 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.284 [2024-12-06 11:50:24.986158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.284 [2024-12-06 11:50:24.986195] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.284 [2024-12-06 11:50:24.988583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.284 [2024-12-06 11:50:24.988631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.284 [2024-12-06 11:50:24.988682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.284 [2024-12-06 11:50:24.988693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:27.284 { 00:07:27.284 "results": [ 00:07:27.284 { 00:07:27.284 "job": "raid_bdev1", 00:07:27.284 "core_mask": "0x1", 00:07:27.284 "workload": "randrw", 00:07:27.284 "percentage": 50, 00:07:27.284 "status": "finished", 00:07:27.284 "queue_depth": 1, 00:07:27.284 "io_size": 131072, 00:07:27.284 "runtime": 1.332973, 00:07:27.284 "iops": 23945.721331189754, 00:07:27.284 "mibps": 2993.2151663987192, 00:07:27.284 "io_failed": 0, 
00:07:27.284 "io_timeout": 0, 00:07:27.284 "avg_latency_us": 39.34349039346457, 00:07:27.284 "min_latency_us": 20.90480349344978, 00:07:27.284 "max_latency_us": 1409.4532751091704 00:07:27.284 } 00:07:27.284 ], 00:07:27.284 "core_count": 1 00:07:27.284 } 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74598 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74598 ']' 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74598 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.284 11:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74598 00:07:27.284 killing process with pid 74598 00:07:27.284 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.284 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.284 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74598' 00:07:27.284 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74598 00:07:27.284 [2024-12-06 11:50:25.032077] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.284 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74598 00:07:27.544 [2024-12-06 11:50:25.048061] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q6AiD4x1uG 00:07:27.544 11:50:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:27.544 00:07:27.544 real 0m3.193s 00:07:27.544 user 0m4.041s 00:07:27.544 sys 0m0.482s 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.544 11:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.544 ************************************ 00:07:27.544 END TEST raid_write_error_test 00:07:27.544 ************************************ 00:07:27.803 11:50:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:27.803 11:50:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:27.803 11:50:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:27.803 11:50:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.803 11:50:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.803 11:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.803 ************************************ 00:07:27.803 START TEST raid_state_function_test 00:07:27.803 ************************************ 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:27.803 11:50:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74725 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74725' 00:07:27.803 Process raid pid: 74725 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74725 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74725 ']' 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.803 11:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.803 [2024-12-06 11:50:25.449314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:27.803 [2024-12-06 11:50:25.449529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.062 [2024-12-06 11:50:25.595354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.062 [2024-12-06 11:50:25.640738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.062 [2024-12-06 11:50:25.683504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.062 [2024-12-06 11:50:25.683618] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.632 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.632 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.632 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:28.632 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.633 [2024-12-06 11:50:26.264757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.633 [2024-12-06 11:50:26.264875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.633 [2024-12-06 11:50:26.264910] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.633 [2024-12-06 11:50:26.264934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.633 [2024-12-06 11:50:26.264952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.633 [2024-12-06 11:50:26.264977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.633 "name": "Existed_Raid", 00:07:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.633 "strip_size_kb": 64, 00:07:28.633 "state": "configuring", 00:07:28.633 "raid_level": "raid0", 00:07:28.633 "superblock": false, 00:07:28.633 "num_base_bdevs": 3, 00:07:28.633 "num_base_bdevs_discovered": 0, 00:07:28.633 "num_base_bdevs_operational": 3, 00:07:28.633 "base_bdevs_list": [ 00:07:28.633 { 00:07:28.633 "name": "BaseBdev1", 00:07:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.633 "is_configured": false, 00:07:28.633 "data_offset": 0, 00:07:28.633 "data_size": 0 00:07:28.633 }, 00:07:28.633 { 00:07:28.633 "name": "BaseBdev2", 00:07:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.633 "is_configured": false, 00:07:28.633 "data_offset": 0, 00:07:28.633 "data_size": 0 00:07:28.633 }, 00:07:28.633 { 00:07:28.633 "name": "BaseBdev3", 00:07:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.633 "is_configured": false, 00:07:28.633 "data_offset": 0, 00:07:28.633 "data_size": 0 00:07:28.633 } 00:07:28.633 ] 00:07:28.633 }' 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.633 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.204 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.204 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.204 11:50:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [2024-12-06 11:50:26.675996] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.205 [2024-12-06 11:50:26.676084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [2024-12-06 11:50:26.687989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.205 [2024-12-06 11:50:26.688033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.205 [2024-12-06 11:50:26.688041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.205 [2024-12-06 11:50:26.688050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.205 [2024-12-06 11:50:26.688056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:29.205 [2024-12-06 11:50:26.688064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [2024-12-06 11:50:26.708984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.205 BaseBdev1 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [ 00:07:29.205 { 00:07:29.205 "name": "BaseBdev1", 00:07:29.205 "aliases": [ 00:07:29.205 "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64" 00:07:29.205 ], 00:07:29.205 
"product_name": "Malloc disk", 00:07:29.205 "block_size": 512, 00:07:29.205 "num_blocks": 65536, 00:07:29.205 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:29.205 "assigned_rate_limits": { 00:07:29.205 "rw_ios_per_sec": 0, 00:07:29.205 "rw_mbytes_per_sec": 0, 00:07:29.205 "r_mbytes_per_sec": 0, 00:07:29.205 "w_mbytes_per_sec": 0 00:07:29.205 }, 00:07:29.205 "claimed": true, 00:07:29.205 "claim_type": "exclusive_write", 00:07:29.205 "zoned": false, 00:07:29.205 "supported_io_types": { 00:07:29.205 "read": true, 00:07:29.205 "write": true, 00:07:29.205 "unmap": true, 00:07:29.205 "flush": true, 00:07:29.205 "reset": true, 00:07:29.205 "nvme_admin": false, 00:07:29.205 "nvme_io": false, 00:07:29.205 "nvme_io_md": false, 00:07:29.205 "write_zeroes": true, 00:07:29.205 "zcopy": true, 00:07:29.205 "get_zone_info": false, 00:07:29.205 "zone_management": false, 00:07:29.205 "zone_append": false, 00:07:29.205 "compare": false, 00:07:29.205 "compare_and_write": false, 00:07:29.205 "abort": true, 00:07:29.205 "seek_hole": false, 00:07:29.205 "seek_data": false, 00:07:29.205 "copy": true, 00:07:29.205 "nvme_iov_md": false 00:07:29.205 }, 00:07:29.205 "memory_domains": [ 00:07:29.205 { 00:07:29.205 "dma_device_id": "system", 00:07:29.205 "dma_device_type": 1 00:07:29.205 }, 00:07:29.205 { 00:07:29.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.205 "dma_device_type": 2 00:07:29.205 } 00:07:29.205 ], 00:07:29.205 "driver_specific": {} 00:07:29.205 } 00:07:29.205 ] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.205 11:50:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.205 "name": "Existed_Raid", 00:07:29.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.205 "strip_size_kb": 64, 00:07:29.205 "state": "configuring", 00:07:29.205 "raid_level": "raid0", 00:07:29.205 "superblock": false, 00:07:29.205 "num_base_bdevs": 3, 00:07:29.205 "num_base_bdevs_discovered": 1, 00:07:29.205 "num_base_bdevs_operational": 3, 00:07:29.205 "base_bdevs_list": [ 00:07:29.205 { 00:07:29.205 "name": "BaseBdev1", 
00:07:29.205 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:29.205 "is_configured": true, 00:07:29.205 "data_offset": 0, 00:07:29.205 "data_size": 65536 00:07:29.205 }, 00:07:29.205 { 00:07:29.205 "name": "BaseBdev2", 00:07:29.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.206 "is_configured": false, 00:07:29.206 "data_offset": 0, 00:07:29.206 "data_size": 0 00:07:29.206 }, 00:07:29.206 { 00:07:29.206 "name": "BaseBdev3", 00:07:29.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.206 "is_configured": false, 00:07:29.206 "data_offset": 0, 00:07:29.206 "data_size": 0 00:07:29.206 } 00:07:29.206 ] 00:07:29.206 }' 00:07:29.206 11:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.206 11:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.466 [2024-12-06 11:50:27.188207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.466 [2024-12-06 11:50:27.188263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.466 [2024-12-06 
11:50:27.200253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.466 [2024-12-06 11:50:27.201996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.466 [2024-12-06 11:50:27.202042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.466 [2024-12-06 11:50:27.202051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:29.466 [2024-12-06 11:50:27.202061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.466 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.726 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.726 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.726 "name": "Existed_Raid", 00:07:29.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.727 "strip_size_kb": 64, 00:07:29.727 "state": "configuring", 00:07:29.727 "raid_level": "raid0", 00:07:29.727 "superblock": false, 00:07:29.727 "num_base_bdevs": 3, 00:07:29.727 "num_base_bdevs_discovered": 1, 00:07:29.727 "num_base_bdevs_operational": 3, 00:07:29.727 "base_bdevs_list": [ 00:07:29.727 { 00:07:29.727 "name": "BaseBdev1", 00:07:29.727 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:29.727 "is_configured": true, 00:07:29.727 "data_offset": 0, 00:07:29.727 "data_size": 65536 00:07:29.727 }, 00:07:29.727 { 00:07:29.727 "name": "BaseBdev2", 00:07:29.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.727 "is_configured": false, 00:07:29.727 "data_offset": 0, 00:07:29.727 "data_size": 0 00:07:29.727 }, 00:07:29.727 { 00:07:29.727 "name": "BaseBdev3", 00:07:29.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.727 "is_configured": false, 00:07:29.727 "data_offset": 0, 00:07:29.727 "data_size": 0 00:07:29.727 } 00:07:29.727 ] 00:07:29.727 }' 00:07:29.727 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:29.727 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.986 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.986 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.986 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.986 [2024-12-06 11:50:27.650883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.986 BaseBdev2 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.987 11:50:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.987 [ 00:07:29.987 { 00:07:29.987 "name": "BaseBdev2", 00:07:29.987 "aliases": [ 00:07:29.987 "a9c735fb-4962-48b0-b5ac-417eca2a1333" 00:07:29.987 ], 00:07:29.987 "product_name": "Malloc disk", 00:07:29.987 "block_size": 512, 00:07:29.987 "num_blocks": 65536, 00:07:29.987 "uuid": "a9c735fb-4962-48b0-b5ac-417eca2a1333", 00:07:29.987 "assigned_rate_limits": { 00:07:29.987 "rw_ios_per_sec": 0, 00:07:29.987 "rw_mbytes_per_sec": 0, 00:07:29.987 "r_mbytes_per_sec": 0, 00:07:29.987 "w_mbytes_per_sec": 0 00:07:29.987 }, 00:07:29.987 "claimed": true, 00:07:29.987 "claim_type": "exclusive_write", 00:07:29.987 "zoned": false, 00:07:29.987 "supported_io_types": { 00:07:29.987 "read": true, 00:07:29.987 "write": true, 00:07:29.987 "unmap": true, 00:07:29.987 "flush": true, 00:07:29.987 "reset": true, 00:07:29.987 "nvme_admin": false, 00:07:29.987 "nvme_io": false, 00:07:29.987 "nvme_io_md": false, 00:07:29.987 "write_zeroes": true, 00:07:29.987 "zcopy": true, 00:07:29.987 "get_zone_info": false, 00:07:29.987 "zone_management": false, 00:07:29.987 "zone_append": false, 00:07:29.987 "compare": false, 00:07:29.987 "compare_and_write": false, 00:07:29.987 "abort": true, 00:07:29.987 "seek_hole": false, 00:07:29.987 "seek_data": false, 00:07:29.987 "copy": true, 00:07:29.987 "nvme_iov_md": false 00:07:29.987 }, 00:07:29.987 "memory_domains": [ 00:07:29.987 { 00:07:29.987 "dma_device_id": "system", 00:07:29.987 "dma_device_type": 1 00:07:29.987 }, 00:07:29.987 { 00:07:29.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.987 "dma_device_type": 2 00:07:29.987 } 00:07:29.987 ], 00:07:29.987 "driver_specific": {} 00:07:29.987 } 00:07:29.987 ] 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.987 11:50:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.987 11:50:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.248 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.248 "name": "Existed_Raid", 00:07:30.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.248 "strip_size_kb": 64, 00:07:30.248 "state": "configuring", 00:07:30.248 "raid_level": "raid0", 00:07:30.248 "superblock": false, 00:07:30.248 "num_base_bdevs": 3, 00:07:30.248 "num_base_bdevs_discovered": 2, 00:07:30.248 "num_base_bdevs_operational": 3, 00:07:30.248 "base_bdevs_list": [ 00:07:30.248 { 00:07:30.248 "name": "BaseBdev1", 00:07:30.248 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:30.248 "is_configured": true, 00:07:30.248 "data_offset": 0, 00:07:30.248 "data_size": 65536 00:07:30.248 }, 00:07:30.248 { 00:07:30.248 "name": "BaseBdev2", 00:07:30.248 "uuid": "a9c735fb-4962-48b0-b5ac-417eca2a1333", 00:07:30.248 "is_configured": true, 00:07:30.248 "data_offset": 0, 00:07:30.248 "data_size": 65536 00:07:30.248 }, 00:07:30.248 { 00:07:30.248 "name": "BaseBdev3", 00:07:30.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.248 "is_configured": false, 00:07:30.248 "data_offset": 0, 00:07:30.248 "data_size": 0 00:07:30.248 } 00:07:30.248 ] 00:07:30.248 }' 00:07:30.248 11:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.248 11:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 [2024-12-06 11:50:28.121315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:30.508 [2024-12-06 11:50:28.121418] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:30.508 [2024-12-06 11:50:28.121446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:30.508 [2024-12-06 11:50:28.121725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:30.508 [2024-12-06 11:50:28.121866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:30.508 [2024-12-06 11:50:28.121880] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:30.508 [2024-12-06 11:50:28.122081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.508 BaseBdev3 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.508 
11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 [ 00:07:30.508 { 00:07:30.508 "name": "BaseBdev3", 00:07:30.508 "aliases": [ 00:07:30.508 "2110e731-716f-47ca-b8f2-d7b2cf090348" 00:07:30.508 ], 00:07:30.508 "product_name": "Malloc disk", 00:07:30.508 "block_size": 512, 00:07:30.508 "num_blocks": 65536, 00:07:30.508 "uuid": "2110e731-716f-47ca-b8f2-d7b2cf090348", 00:07:30.508 "assigned_rate_limits": { 00:07:30.508 "rw_ios_per_sec": 0, 00:07:30.508 "rw_mbytes_per_sec": 0, 00:07:30.508 "r_mbytes_per_sec": 0, 00:07:30.508 "w_mbytes_per_sec": 0 00:07:30.508 }, 00:07:30.508 "claimed": true, 00:07:30.508 "claim_type": "exclusive_write", 00:07:30.508 "zoned": false, 00:07:30.508 "supported_io_types": { 00:07:30.508 "read": true, 00:07:30.508 "write": true, 00:07:30.508 "unmap": true, 00:07:30.508 "flush": true, 00:07:30.508 "reset": true, 00:07:30.508 "nvme_admin": false, 00:07:30.508 "nvme_io": false, 00:07:30.508 "nvme_io_md": false, 00:07:30.508 "write_zeroes": true, 00:07:30.508 "zcopy": true, 00:07:30.508 "get_zone_info": false, 00:07:30.508 "zone_management": false, 00:07:30.508 "zone_append": false, 00:07:30.508 "compare": false, 00:07:30.508 "compare_and_write": false, 00:07:30.508 "abort": true, 00:07:30.508 "seek_hole": false, 00:07:30.508 "seek_data": false, 00:07:30.508 "copy": true, 00:07:30.508 "nvme_iov_md": false 00:07:30.508 }, 00:07:30.508 "memory_domains": [ 00:07:30.508 { 00:07:30.508 "dma_device_id": "system", 00:07:30.508 "dma_device_type": 1 00:07:30.508 }, 00:07:30.508 { 00:07:30.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.508 "dma_device_type": 2 00:07:30.508 } 00:07:30.508 ], 00:07:30.508 "driver_specific": {} 00:07:30.508 } 00:07:30.508 ] 
00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:30.508 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.509 "name": "Existed_Raid", 00:07:30.509 "uuid": "876a4e7b-354e-4ac0-a6e6-3521d014a7d4", 00:07:30.509 "strip_size_kb": 64, 00:07:30.509 "state": "online", 00:07:30.509 "raid_level": "raid0", 00:07:30.509 "superblock": false, 00:07:30.509 "num_base_bdevs": 3, 00:07:30.509 "num_base_bdevs_discovered": 3, 00:07:30.509 "num_base_bdevs_operational": 3, 00:07:30.509 "base_bdevs_list": [ 00:07:30.509 { 00:07:30.509 "name": "BaseBdev1", 00:07:30.509 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:30.509 "is_configured": true, 00:07:30.509 "data_offset": 0, 00:07:30.509 "data_size": 65536 00:07:30.509 }, 00:07:30.509 { 00:07:30.509 "name": "BaseBdev2", 00:07:30.509 "uuid": "a9c735fb-4962-48b0-b5ac-417eca2a1333", 00:07:30.509 "is_configured": true, 00:07:30.509 "data_offset": 0, 00:07:30.509 "data_size": 65536 00:07:30.509 }, 00:07:30.509 { 00:07:30.509 "name": "BaseBdev3", 00:07:30.509 "uuid": "2110e731-716f-47ca-b8f2-d7b2cf090348", 00:07:30.509 "is_configured": true, 00:07:30.509 "data_offset": 0, 00:07:30.509 "data_size": 65536 00:07:30.509 } 00:07:30.509 ] 00:07:30.509 }' 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.509 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.078 [2024-12-06 11:50:28.600792] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.078 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.078 "name": "Existed_Raid", 00:07:31.078 "aliases": [ 00:07:31.078 "876a4e7b-354e-4ac0-a6e6-3521d014a7d4" 00:07:31.078 ], 00:07:31.078 "product_name": "Raid Volume", 00:07:31.078 "block_size": 512, 00:07:31.078 "num_blocks": 196608, 00:07:31.078 "uuid": "876a4e7b-354e-4ac0-a6e6-3521d014a7d4", 00:07:31.078 "assigned_rate_limits": { 00:07:31.078 "rw_ios_per_sec": 0, 00:07:31.078 "rw_mbytes_per_sec": 0, 00:07:31.078 "r_mbytes_per_sec": 0, 00:07:31.078 "w_mbytes_per_sec": 0 00:07:31.078 }, 00:07:31.078 "claimed": false, 00:07:31.078 "zoned": false, 00:07:31.078 "supported_io_types": { 00:07:31.078 "read": true, 00:07:31.078 "write": true, 00:07:31.078 "unmap": true, 00:07:31.078 "flush": true, 00:07:31.078 "reset": true, 00:07:31.078 "nvme_admin": false, 00:07:31.078 "nvme_io": false, 00:07:31.078 "nvme_io_md": false, 00:07:31.078 "write_zeroes": true, 00:07:31.078 "zcopy": false, 00:07:31.078 "get_zone_info": false, 00:07:31.078 "zone_management": false, 00:07:31.078 
"zone_append": false, 00:07:31.078 "compare": false, 00:07:31.078 "compare_and_write": false, 00:07:31.078 "abort": false, 00:07:31.078 "seek_hole": false, 00:07:31.078 "seek_data": false, 00:07:31.078 "copy": false, 00:07:31.078 "nvme_iov_md": false 00:07:31.078 }, 00:07:31.078 "memory_domains": [ 00:07:31.078 { 00:07:31.078 "dma_device_id": "system", 00:07:31.078 "dma_device_type": 1 00:07:31.078 }, 00:07:31.078 { 00:07:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.078 "dma_device_type": 2 00:07:31.078 }, 00:07:31.078 { 00:07:31.078 "dma_device_id": "system", 00:07:31.078 "dma_device_type": 1 00:07:31.078 }, 00:07:31.078 { 00:07:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.078 "dma_device_type": 2 00:07:31.078 }, 00:07:31.078 { 00:07:31.078 "dma_device_id": "system", 00:07:31.078 "dma_device_type": 1 00:07:31.078 }, 00:07:31.078 { 00:07:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.078 "dma_device_type": 2 00:07:31.078 } 00:07:31.078 ], 00:07:31.078 "driver_specific": { 00:07:31.078 "raid": { 00:07:31.078 "uuid": "876a4e7b-354e-4ac0-a6e6-3521d014a7d4", 00:07:31.078 "strip_size_kb": 64, 00:07:31.078 "state": "online", 00:07:31.078 "raid_level": "raid0", 00:07:31.078 "superblock": false, 00:07:31.078 "num_base_bdevs": 3, 00:07:31.078 "num_base_bdevs_discovered": 3, 00:07:31.078 "num_base_bdevs_operational": 3, 00:07:31.078 "base_bdevs_list": [ 00:07:31.079 { 00:07:31.079 "name": "BaseBdev1", 00:07:31.079 "uuid": "6b7a909b-dc6b-4cd8-89a9-7bf59f8d4a64", 00:07:31.079 "is_configured": true, 00:07:31.079 "data_offset": 0, 00:07:31.079 "data_size": 65536 00:07:31.079 }, 00:07:31.079 { 00:07:31.079 "name": "BaseBdev2", 00:07:31.079 "uuid": "a9c735fb-4962-48b0-b5ac-417eca2a1333", 00:07:31.079 "is_configured": true, 00:07:31.079 "data_offset": 0, 00:07:31.079 "data_size": 65536 00:07:31.079 }, 00:07:31.079 { 00:07:31.079 "name": "BaseBdev3", 00:07:31.079 "uuid": "2110e731-716f-47ca-b8f2-d7b2cf090348", 00:07:31.079 "is_configured": true, 
00:07:31.079 "data_offset": 0, 00:07:31.079 "data_size": 65536 00:07:31.079 } 00:07:31.079 ] 00:07:31.079 } 00:07:31.079 } 00:07:31.079 }' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:31.079 BaseBdev2 00:07:31.079 BaseBdev3' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.079 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 [2024-12-06 11:50:28.860114] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:31.339 [2024-12-06 11:50:28.860138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.339 [2024-12-06 11:50:28.860198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
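The trace above (bdev_bdev_raid.sh@188-193) extracts the configured base bdev names with jq, builds a `"block_size md_size md_interleave dif_type"` tuple for the raid bdev, and requires every base bdev to report the identical tuple. A minimal, self-contained sketch of that comparison logic follows; the tuple values and bdev names are illustrative stand-ins (a real run obtains them via `rpc_cmd bdev_get_bdevs | jq`), and the trailing spaces come from jq's `join(" ")` emitting empty strings for null fields, which is why the log's `[[ 512 == \5\1\2\ \ \ ]]` test shows escaped spaces.

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@189-193 tuple comparison (values are
# illustrative, not read from a live SPDK target).

# jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# yields "512   " for a 512-byte bdev with no metadata: nulls join as
# empty strings, leaving trailing spaces.
cmp_raid_bdev='512   '

for name in BaseBdev1 BaseBdev2 BaseBdev3; do
    # Stand-in for the per-bdev jq query over `bdev_get_bdevs -b $name`.
    cmp_base_bdev='512   '
    if [[ "$cmp_base_bdev" != "$cmp_raid_bdev" ]]; then
        echo "tuple mismatch on $name" >&2
        exit 1
    fi
done
echo "all base bdevs match raid bdev"
```

The exact-string match (rather than field-by-field numeric comparison) is what the test relies on: any difference in metadata size, interleave mode, or DIF type changes the joined tuple and fails the `[[ ... == ... ]]` check.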
00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.339 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.340 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.340 11:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.340 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.340 "name": "Existed_Raid", 00:07:31.340 "uuid": "876a4e7b-354e-4ac0-a6e6-3521d014a7d4", 00:07:31.340 "strip_size_kb": 64, 00:07:31.340 "state": "offline", 00:07:31.340 "raid_level": "raid0", 00:07:31.340 "superblock": false, 00:07:31.340 "num_base_bdevs": 3, 00:07:31.340 "num_base_bdevs_discovered": 2, 00:07:31.340 "num_base_bdevs_operational": 2, 00:07:31.340 "base_bdevs_list": [ 00:07:31.340 { 00:07:31.340 "name": null, 00:07:31.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.340 "is_configured": false, 00:07:31.340 "data_offset": 0, 00:07:31.340 "data_size": 65536 00:07:31.340 }, 00:07:31.340 { 00:07:31.340 "name": "BaseBdev2", 00:07:31.340 "uuid": "a9c735fb-4962-48b0-b5ac-417eca2a1333", 00:07:31.340 "is_configured": true, 00:07:31.340 "data_offset": 0, 00:07:31.340 "data_size": 65536 00:07:31.340 }, 00:07:31.340 { 00:07:31.340 "name": "BaseBdev3", 00:07:31.340 "uuid": "2110e731-716f-47ca-b8f2-d7b2cf090348", 00:07:31.340 "is_configured": true, 00:07:31.340 "data_offset": 0, 00:07:31.340 "data_size": 65536 00:07:31.340 } 00:07:31.340 ] 00:07:31.340 }' 00:07:31.340 11:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.340 11:50:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 [2024-12-06 11:50:29.314840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.600 11:50:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.860 [2024-12-06 11:50:29.385982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:31.860 [2024-12-06 11:50:29.386092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.860 BaseBdev2 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.860 [ 00:07:31.860 { 00:07:31.860 "name": "BaseBdev2", 00:07:31.860 "aliases": [ 00:07:31.860 "81a733e2-1fe9-4b63-8958-e53618072e61" 00:07:31.860 ], 00:07:31.860 "product_name": "Malloc disk", 00:07:31.860 "block_size": 512, 00:07:31.860 "num_blocks": 65536, 00:07:31.860 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:31.860 "assigned_rate_limits": { 00:07:31.860 "rw_ios_per_sec": 0, 00:07:31.860 "rw_mbytes_per_sec": 0, 00:07:31.860 "r_mbytes_per_sec": 0, 00:07:31.860 "w_mbytes_per_sec": 0 00:07:31.860 }, 00:07:31.860 "claimed": false, 00:07:31.860 "zoned": false, 00:07:31.860 "supported_io_types": { 00:07:31.860 "read": true, 00:07:31.860 "write": true, 00:07:31.860 "unmap": true, 00:07:31.860 "flush": true, 00:07:31.860 "reset": true, 00:07:31.860 "nvme_admin": false, 00:07:31.860 "nvme_io": false, 00:07:31.860 "nvme_io_md": false, 00:07:31.860 "write_zeroes": true, 00:07:31.860 "zcopy": true, 00:07:31.860 "get_zone_info": false, 00:07:31.860 "zone_management": false, 00:07:31.860 "zone_append": false, 00:07:31.860 "compare": false, 00:07:31.860 "compare_and_write": false, 00:07:31.860 "abort": true, 00:07:31.860 "seek_hole": false, 00:07:31.860 "seek_data": false, 00:07:31.860 "copy": true, 00:07:31.860 "nvme_iov_md": false 00:07:31.860 }, 00:07:31.860 "memory_domains": [ 00:07:31.860 { 00:07:31.860 "dma_device_id": "system", 00:07:31.860 "dma_device_type": 1 00:07:31.860 }, 
00:07:31.860 { 00:07:31.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.860 "dma_device_type": 2 00:07:31.860 } 00:07:31.860 ], 00:07:31.860 "driver_specific": {} 00:07:31.860 } 00:07:31.860 ] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.860 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.861 BaseBdev3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
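The sequence traced at common/autotest_common.sh@899-907 (`waitforbdev BaseBdev2`: set a default 2000 ms timeout, `bdev_wait_for_examine`, then `bdev_get_bdevs -b ... -t 2000`, return 0 on success) amounts to a poll-until-visible helper. A hedged sketch of that shape, with a stubbed existence check in place of the real `rpc_cmd bdev_get_bdevs` call (names and polling interval are assumptions, not SPDK's implementation):

```shell
#!/usr/bin/env bash
# Poll-until-visible sketch of a waitforbdev-style helper.
# KNOWN_BDEVS and bdev_exists are stubs standing in for
# `rpc_cmd bdev_get_bdevs -b <name>` against a live SPDK target.
declare -A KNOWN_BDEVS=([BaseBdev2]=1 [BaseBdev3]=1)

bdev_exists() {
    # Succeeds iff the named bdev is registered in the stub table.
    [[ -n "${KNOWN_BDEVS[$1]:-}" ]]
}

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds; default as in the trace
    local i
    # Poll every 100 ms until the bdev appears or the timeout elapses.
    for ((i = 0; i < bdev_timeout / 100; i++)); do
        if bdev_exists "$bdev_name"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

In the log the timeout is enforced by the RPC itself (`-t 2000`) rather than a shell loop, but the contract is the same: the caller blocks until the freshly created malloc bdev is examinable, then proceeds to use it as a raid base bdev.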
00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.861 [ 00:07:31.861 { 00:07:31.861 "name": "BaseBdev3", 00:07:31.861 "aliases": [ 00:07:31.861 "052bf1a6-07c5-4f0c-b2e1-04f8a2647732" 00:07:31.861 ], 00:07:31.861 "product_name": "Malloc disk", 00:07:31.861 "block_size": 512, 00:07:31.861 "num_blocks": 65536, 00:07:31.861 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:31.861 "assigned_rate_limits": { 00:07:31.861 "rw_ios_per_sec": 0, 00:07:31.861 "rw_mbytes_per_sec": 0, 00:07:31.861 "r_mbytes_per_sec": 0, 00:07:31.861 "w_mbytes_per_sec": 0 00:07:31.861 }, 00:07:31.861 "claimed": false, 00:07:31.861 "zoned": false, 00:07:31.861 "supported_io_types": { 00:07:31.861 "read": true, 00:07:31.861 "write": true, 00:07:31.861 "unmap": true, 00:07:31.861 "flush": true, 00:07:31.861 "reset": true, 00:07:31.861 "nvme_admin": false, 00:07:31.861 "nvme_io": false, 00:07:31.861 "nvme_io_md": false, 00:07:31.861 "write_zeroes": true, 00:07:31.861 "zcopy": true, 00:07:31.861 "get_zone_info": false, 00:07:31.861 "zone_management": false, 00:07:31.861 "zone_append": false, 00:07:31.861 "compare": false, 00:07:31.861 "compare_and_write": false, 00:07:31.861 "abort": true, 00:07:31.861 "seek_hole": false, 00:07:31.861 "seek_data": false, 00:07:31.861 "copy": true, 00:07:31.861 "nvme_iov_md": false 00:07:31.861 }, 00:07:31.861 "memory_domains": [ 00:07:31.861 { 00:07:31.861 "dma_device_id": "system", 00:07:31.861 "dma_device_type": 1 00:07:31.861 }, 00:07:31.861 { 
00:07:31.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.861 "dma_device_type": 2 00:07:31.861 } 00:07:31.861 ], 00:07:31.861 "driver_specific": {} 00:07:31.861 } 00:07:31.861 ] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.861 [2024-12-06 11:50:29.568724] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.861 [2024-12-06 11:50:29.568825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.861 [2024-12-06 11:50:29.568851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.861 [2024-12-06 11:50:29.570616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.861 "name": "Existed_Raid", 00:07:31.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.861 "strip_size_kb": 64, 00:07:31.861 "state": "configuring", 00:07:31.861 "raid_level": "raid0", 00:07:31.861 "superblock": false, 00:07:31.861 "num_base_bdevs": 3, 00:07:31.861 "num_base_bdevs_discovered": 2, 00:07:31.861 "num_base_bdevs_operational": 3, 00:07:31.861 "base_bdevs_list": [ 00:07:31.861 { 00:07:31.861 "name": "BaseBdev1", 00:07:31.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.861 
"is_configured": false, 00:07:31.861 "data_offset": 0, 00:07:31.861 "data_size": 0 00:07:31.861 }, 00:07:31.861 { 00:07:31.861 "name": "BaseBdev2", 00:07:31.861 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:31.861 "is_configured": true, 00:07:31.861 "data_offset": 0, 00:07:31.861 "data_size": 65536 00:07:31.861 }, 00:07:31.861 { 00:07:31.861 "name": "BaseBdev3", 00:07:31.861 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:31.861 "is_configured": true, 00:07:31.861 "data_offset": 0, 00:07:31.861 "data_size": 65536 00:07:31.861 } 00:07:31.861 ] 00:07:31.861 }' 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.861 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.431 [2024-12-06 11:50:29.972016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.431 11:50:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.431 11:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.431 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.431 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.431 "name": "Existed_Raid", 00:07:32.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.431 "strip_size_kb": 64, 00:07:32.431 "state": "configuring", 00:07:32.431 "raid_level": "raid0", 00:07:32.431 "superblock": false, 00:07:32.431 "num_base_bdevs": 3, 00:07:32.431 "num_base_bdevs_discovered": 1, 00:07:32.431 "num_base_bdevs_operational": 3, 00:07:32.431 "base_bdevs_list": [ 00:07:32.431 { 00:07:32.431 "name": "BaseBdev1", 00:07:32.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.431 "is_configured": false, 00:07:32.431 "data_offset": 0, 00:07:32.431 "data_size": 0 00:07:32.431 }, 00:07:32.431 { 00:07:32.431 "name": null, 00:07:32.431 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:32.431 "is_configured": false, 00:07:32.431 "data_offset": 0, 
00:07:32.431 "data_size": 65536 00:07:32.431 }, 00:07:32.431 { 00:07:32.431 "name": "BaseBdev3", 00:07:32.431 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:32.431 "is_configured": true, 00:07:32.431 "data_offset": 0, 00:07:32.431 "data_size": 65536 00:07:32.431 } 00:07:32.431 ] 00:07:32.431 }' 00:07:32.431 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.431 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.691 [2024-12-06 11:50:30.442285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.691 BaseBdev1 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.691 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.950 [ 00:07:32.950 { 00:07:32.950 "name": "BaseBdev1", 00:07:32.950 "aliases": [ 00:07:32.950 "286db80c-761c-49c1-8d1b-cfdb7127a5e7" 00:07:32.950 ], 00:07:32.950 "product_name": "Malloc disk", 00:07:32.950 "block_size": 512, 00:07:32.950 "num_blocks": 65536, 00:07:32.950 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:32.950 "assigned_rate_limits": { 00:07:32.950 "rw_ios_per_sec": 0, 00:07:32.950 "rw_mbytes_per_sec": 0, 00:07:32.950 "r_mbytes_per_sec": 0, 00:07:32.950 "w_mbytes_per_sec": 0 00:07:32.950 }, 00:07:32.950 "claimed": true, 00:07:32.950 "claim_type": "exclusive_write", 00:07:32.950 "zoned": false, 00:07:32.950 "supported_io_types": { 00:07:32.950 "read": true, 00:07:32.950 "write": true, 00:07:32.950 "unmap": 
true, 00:07:32.950 "flush": true, 00:07:32.950 "reset": true, 00:07:32.950 "nvme_admin": false, 00:07:32.950 "nvme_io": false, 00:07:32.950 "nvme_io_md": false, 00:07:32.950 "write_zeroes": true, 00:07:32.950 "zcopy": true, 00:07:32.950 "get_zone_info": false, 00:07:32.950 "zone_management": false, 00:07:32.950 "zone_append": false, 00:07:32.950 "compare": false, 00:07:32.950 "compare_and_write": false, 00:07:32.950 "abort": true, 00:07:32.950 "seek_hole": false, 00:07:32.950 "seek_data": false, 00:07:32.950 "copy": true, 00:07:32.950 "nvme_iov_md": false 00:07:32.950 }, 00:07:32.950 "memory_domains": [ 00:07:32.950 { 00:07:32.950 "dma_device_id": "system", 00:07:32.950 "dma_device_type": 1 00:07:32.950 }, 00:07:32.950 { 00:07:32.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.950 "dma_device_type": 2 00:07:32.950 } 00:07:32.950 ], 00:07:32.950 "driver_specific": {} 00:07:32.950 } 00:07:32.950 ] 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.950 11:50:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.950 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.951 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.951 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.951 "name": "Existed_Raid", 00:07:32.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.951 "strip_size_kb": 64, 00:07:32.951 "state": "configuring", 00:07:32.951 "raid_level": "raid0", 00:07:32.951 "superblock": false, 00:07:32.951 "num_base_bdevs": 3, 00:07:32.951 "num_base_bdevs_discovered": 2, 00:07:32.951 "num_base_bdevs_operational": 3, 00:07:32.951 "base_bdevs_list": [ 00:07:32.951 { 00:07:32.951 "name": "BaseBdev1", 00:07:32.951 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:32.951 "is_configured": true, 00:07:32.951 "data_offset": 0, 00:07:32.951 "data_size": 65536 00:07:32.951 }, 00:07:32.951 { 00:07:32.951 "name": null, 00:07:32.951 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:32.951 "is_configured": false, 00:07:32.951 "data_offset": 0, 00:07:32.951 "data_size": 65536 00:07:32.951 }, 00:07:32.951 { 00:07:32.951 "name": "BaseBdev3", 00:07:32.951 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:32.951 "is_configured": true, 00:07:32.951 "data_offset": 0, 
00:07:32.951 "data_size": 65536 00:07:32.951 } 00:07:32.951 ] 00:07:32.951 }' 00:07:32.951 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.951 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:33.210 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.211 [2024-12-06 11:50:30.941462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.211 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.470 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.470 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.470 "name": "Existed_Raid", 00:07:33.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.470 "strip_size_kb": 64, 00:07:33.470 "state": "configuring", 00:07:33.470 "raid_level": "raid0", 00:07:33.470 "superblock": false, 00:07:33.470 "num_base_bdevs": 3, 00:07:33.470 "num_base_bdevs_discovered": 1, 00:07:33.470 "num_base_bdevs_operational": 3, 00:07:33.470 "base_bdevs_list": [ 00:07:33.470 { 00:07:33.470 "name": "BaseBdev1", 00:07:33.470 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:33.470 "is_configured": true, 00:07:33.470 "data_offset": 0, 00:07:33.470 "data_size": 65536 00:07:33.470 }, 00:07:33.470 { 
00:07:33.470 "name": null, 00:07:33.470 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:33.470 "is_configured": false, 00:07:33.470 "data_offset": 0, 00:07:33.470 "data_size": 65536 00:07:33.470 }, 00:07:33.470 { 00:07:33.470 "name": null, 00:07:33.470 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:33.470 "is_configured": false, 00:07:33.470 "data_offset": 0, 00:07:33.470 "data_size": 65536 00:07:33.470 } 00:07:33.470 ] 00:07:33.470 }' 00:07:33.470 11:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.470 11:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.731 [2024-12-06 11:50:31.384730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.731 "name": "Existed_Raid", 00:07:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.731 "strip_size_kb": 64, 00:07:33.731 "state": "configuring", 00:07:33.731 "raid_level": "raid0", 00:07:33.731 
"superblock": false, 00:07:33.731 "num_base_bdevs": 3, 00:07:33.731 "num_base_bdevs_discovered": 2, 00:07:33.731 "num_base_bdevs_operational": 3, 00:07:33.731 "base_bdevs_list": [ 00:07:33.731 { 00:07:33.731 "name": "BaseBdev1", 00:07:33.731 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:33.731 "is_configured": true, 00:07:33.731 "data_offset": 0, 00:07:33.731 "data_size": 65536 00:07:33.731 }, 00:07:33.731 { 00:07:33.731 "name": null, 00:07:33.731 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:33.731 "is_configured": false, 00:07:33.731 "data_offset": 0, 00:07:33.731 "data_size": 65536 00:07:33.731 }, 00:07:33.731 { 00:07:33.731 "name": "BaseBdev3", 00:07:33.731 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:33.731 "is_configured": true, 00:07:33.731 "data_offset": 0, 00:07:33.731 "data_size": 65536 00:07:33.731 } 00:07:33.731 ] 00:07:33.731 }' 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.731 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.301 [2024-12-06 11:50:31.855936] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.301 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.301 "name": "Existed_Raid", 00:07:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.301 "strip_size_kb": 64, 00:07:34.301 "state": "configuring", 00:07:34.301 "raid_level": "raid0", 00:07:34.301 "superblock": false, 00:07:34.301 "num_base_bdevs": 3, 00:07:34.301 "num_base_bdevs_discovered": 1, 00:07:34.301 "num_base_bdevs_operational": 3, 00:07:34.301 "base_bdevs_list": [ 00:07:34.301 { 00:07:34.301 "name": null, 00:07:34.301 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:34.301 "is_configured": false, 00:07:34.301 "data_offset": 0, 00:07:34.301 "data_size": 65536 00:07:34.301 }, 00:07:34.301 { 00:07:34.301 "name": null, 00:07:34.301 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:34.301 "is_configured": false, 00:07:34.301 "data_offset": 0, 00:07:34.301 "data_size": 65536 00:07:34.301 }, 00:07:34.302 { 00:07:34.302 "name": "BaseBdev3", 00:07:34.302 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:34.302 "is_configured": true, 00:07:34.302 "data_offset": 0, 00:07:34.302 "data_size": 65536 00:07:34.302 } 00:07:34.302 ] 00:07:34.302 }' 00:07:34.302 11:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.302 11:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.561 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.561 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.561 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.561 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:34.561 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.820 [2024-12-06 11:50:32.341647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.820 "name": "Existed_Raid", 00:07:34.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.820 "strip_size_kb": 64, 00:07:34.820 "state": "configuring", 00:07:34.820 "raid_level": "raid0", 00:07:34.820 "superblock": false, 00:07:34.820 "num_base_bdevs": 3, 00:07:34.820 "num_base_bdevs_discovered": 2, 00:07:34.820 "num_base_bdevs_operational": 3, 00:07:34.820 "base_bdevs_list": [ 00:07:34.820 { 00:07:34.820 "name": null, 00:07:34.820 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:34.820 "is_configured": false, 00:07:34.820 "data_offset": 0, 00:07:34.820 "data_size": 65536 00:07:34.820 }, 00:07:34.820 { 00:07:34.820 "name": "BaseBdev2", 00:07:34.820 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:34.820 "is_configured": true, 00:07:34.820 "data_offset": 0, 00:07:34.820 "data_size": 65536 00:07:34.820 }, 00:07:34.820 { 00:07:34.820 "name": "BaseBdev3", 00:07:34.820 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:34.820 "is_configured": true, 00:07:34.820 "data_offset": 0, 00:07:34.820 "data_size": 65536 00:07:34.820 } 00:07:34.820 ] 00:07:34.820 }' 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.820 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.079 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.080 
11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.080 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 286db80c-761c-49c1-8d1b-cfdb7127a5e7 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 [2024-12-06 11:50:32.883748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:35.340 [2024-12-06 11:50:32.883859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:35.340 [2024-12-06 11:50:32.883886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:35.340 [2024-12-06 11:50:32.884145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:07:35.340 [2024-12-06 11:50:32.884319] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:35.340 [2024-12-06 11:50:32.884361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:35.340 [2024-12-06 11:50:32.884583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.340 NewBaseBdev 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:35.340 [ 00:07:35.340 { 00:07:35.340 "name": "NewBaseBdev", 00:07:35.340 "aliases": [ 00:07:35.340 "286db80c-761c-49c1-8d1b-cfdb7127a5e7" 00:07:35.340 ], 00:07:35.340 "product_name": "Malloc disk", 00:07:35.340 "block_size": 512, 00:07:35.340 "num_blocks": 65536, 00:07:35.340 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:35.340 "assigned_rate_limits": { 00:07:35.340 "rw_ios_per_sec": 0, 00:07:35.340 "rw_mbytes_per_sec": 0, 00:07:35.340 "r_mbytes_per_sec": 0, 00:07:35.340 "w_mbytes_per_sec": 0 00:07:35.340 }, 00:07:35.340 "claimed": true, 00:07:35.340 "claim_type": "exclusive_write", 00:07:35.340 "zoned": false, 00:07:35.340 "supported_io_types": { 00:07:35.340 "read": true, 00:07:35.340 "write": true, 00:07:35.340 "unmap": true, 00:07:35.340 "flush": true, 00:07:35.340 "reset": true, 00:07:35.340 "nvme_admin": false, 00:07:35.340 "nvme_io": false, 00:07:35.340 "nvme_io_md": false, 00:07:35.340 "write_zeroes": true, 00:07:35.340 "zcopy": true, 00:07:35.340 "get_zone_info": false, 00:07:35.340 "zone_management": false, 00:07:35.340 "zone_append": false, 00:07:35.340 "compare": false, 00:07:35.340 "compare_and_write": false, 00:07:35.340 "abort": true, 00:07:35.340 "seek_hole": false, 00:07:35.340 "seek_data": false, 00:07:35.340 "copy": true, 00:07:35.340 "nvme_iov_md": false 00:07:35.340 }, 00:07:35.340 "memory_domains": [ 00:07:35.340 { 00:07:35.340 "dma_device_id": "system", 00:07:35.340 "dma_device_type": 1 00:07:35.340 }, 00:07:35.340 { 00:07:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.340 "dma_device_type": 2 00:07:35.340 } 00:07:35.340 ], 00:07:35.340 "driver_specific": {} 00:07:35.340 } 00:07:35.340 ] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.340 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.340 "name": "Existed_Raid", 00:07:35.340 "uuid": "a2869cf7-fd01-447c-95f9-1f6a3ca81bdf", 00:07:35.340 "strip_size_kb": 64, 00:07:35.340 "state": "online", 00:07:35.340 "raid_level": "raid0", 00:07:35.340 "superblock": false, 00:07:35.340 "num_base_bdevs": 3, 00:07:35.340 
"num_base_bdevs_discovered": 3, 00:07:35.340 "num_base_bdevs_operational": 3, 00:07:35.340 "base_bdevs_list": [ 00:07:35.340 { 00:07:35.340 "name": "NewBaseBdev", 00:07:35.340 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:35.340 "is_configured": true, 00:07:35.340 "data_offset": 0, 00:07:35.340 "data_size": 65536 00:07:35.340 }, 00:07:35.340 { 00:07:35.340 "name": "BaseBdev2", 00:07:35.340 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:35.340 "is_configured": true, 00:07:35.340 "data_offset": 0, 00:07:35.340 "data_size": 65536 00:07:35.340 }, 00:07:35.340 { 00:07:35.340 "name": "BaseBdev3", 00:07:35.340 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:35.340 "is_configured": true, 00:07:35.340 "data_offset": 0, 00:07:35.341 "data_size": 65536 00:07:35.341 } 00:07:35.341 ] 00:07:35.341 }' 00:07:35.341 11:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.341 11:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.600 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.859 [2024-12-06 11:50:33.363449] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.859 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.859 "name": "Existed_Raid", 00:07:35.859 "aliases": [ 00:07:35.859 "a2869cf7-fd01-447c-95f9-1f6a3ca81bdf" 00:07:35.859 ], 00:07:35.859 "product_name": "Raid Volume", 00:07:35.859 "block_size": 512, 00:07:35.859 "num_blocks": 196608, 00:07:35.859 "uuid": "a2869cf7-fd01-447c-95f9-1f6a3ca81bdf", 00:07:35.859 "assigned_rate_limits": { 00:07:35.859 "rw_ios_per_sec": 0, 00:07:35.859 "rw_mbytes_per_sec": 0, 00:07:35.859 "r_mbytes_per_sec": 0, 00:07:35.859 "w_mbytes_per_sec": 0 00:07:35.859 }, 00:07:35.859 "claimed": false, 00:07:35.859 "zoned": false, 00:07:35.859 "supported_io_types": { 00:07:35.859 "read": true, 00:07:35.859 "write": true, 00:07:35.859 "unmap": true, 00:07:35.859 "flush": true, 00:07:35.859 "reset": true, 00:07:35.859 "nvme_admin": false, 00:07:35.859 "nvme_io": false, 00:07:35.859 "nvme_io_md": false, 00:07:35.859 "write_zeroes": true, 00:07:35.859 "zcopy": false, 00:07:35.859 "get_zone_info": false, 00:07:35.859 "zone_management": false, 00:07:35.859 "zone_append": false, 00:07:35.859 "compare": false, 00:07:35.859 "compare_and_write": false, 00:07:35.859 "abort": false, 00:07:35.859 "seek_hole": false, 00:07:35.859 "seek_data": false, 00:07:35.859 "copy": false, 00:07:35.859 "nvme_iov_md": false 00:07:35.859 }, 00:07:35.859 "memory_domains": [ 00:07:35.859 { 00:07:35.859 "dma_device_id": "system", 00:07:35.859 "dma_device_type": 1 00:07:35.859 }, 00:07:35.859 { 00:07:35.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.859 "dma_device_type": 2 00:07:35.859 }, 
00:07:35.859 { 00:07:35.859 "dma_device_id": "system", 00:07:35.859 "dma_device_type": 1 00:07:35.859 }, 00:07:35.859 { 00:07:35.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.859 "dma_device_type": 2 00:07:35.859 }, 00:07:35.859 { 00:07:35.859 "dma_device_id": "system", 00:07:35.859 "dma_device_type": 1 00:07:35.859 }, 00:07:35.859 { 00:07:35.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.859 "dma_device_type": 2 00:07:35.859 } 00:07:35.859 ], 00:07:35.859 "driver_specific": { 00:07:35.859 "raid": { 00:07:35.859 "uuid": "a2869cf7-fd01-447c-95f9-1f6a3ca81bdf", 00:07:35.859 "strip_size_kb": 64, 00:07:35.859 "state": "online", 00:07:35.859 "raid_level": "raid0", 00:07:35.859 "superblock": false, 00:07:35.859 "num_base_bdevs": 3, 00:07:35.859 "num_base_bdevs_discovered": 3, 00:07:35.859 "num_base_bdevs_operational": 3, 00:07:35.859 "base_bdevs_list": [ 00:07:35.859 { 00:07:35.859 "name": "NewBaseBdev", 00:07:35.859 "uuid": "286db80c-761c-49c1-8d1b-cfdb7127a5e7", 00:07:35.859 "is_configured": true, 00:07:35.859 "data_offset": 0, 00:07:35.860 "data_size": 65536 00:07:35.860 }, 00:07:35.860 { 00:07:35.860 "name": "BaseBdev2", 00:07:35.860 "uuid": "81a733e2-1fe9-4b63-8958-e53618072e61", 00:07:35.860 "is_configured": true, 00:07:35.860 "data_offset": 0, 00:07:35.860 "data_size": 65536 00:07:35.860 }, 00:07:35.860 { 00:07:35.860 "name": "BaseBdev3", 00:07:35.860 "uuid": "052bf1a6-07c5-4f0c-b2e1-04f8a2647732", 00:07:35.860 "is_configured": true, 00:07:35.860 "data_offset": 0, 00:07:35.860 "data_size": 65536 00:07:35.860 } 00:07:35.860 ] 00:07:35.860 } 00:07:35.860 } 00:07:35.860 }' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:35.860 BaseBdev2 00:07:35.860 BaseBdev3' 00:07:35.860 11:50:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.860 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.860 [2024-12-06 11:50:33.614694] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.860 [2024-12-06 11:50:33.614761] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.860 [2024-12-06 11:50:33.614868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.860 [2024-12-06 11:50:33.614934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.860 [2024-12-06 11:50:33.614979] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74725 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74725 ']' 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74725 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74725 00:07:36.119 killing process with pid 74725 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74725' 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74725 00:07:36.119 [2024-12-06 11:50:33.660725] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.119 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74725 00:07:36.119 [2024-12-06 11:50:33.692536] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.380 11:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.380 00:07:36.380 real 0m8.573s 00:07:36.380 user 0m14.631s 00:07:36.380 sys 0m1.641s 00:07:36.380 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:36.380 11:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 ************************************ 00:07:36.380 END TEST raid_state_function_test 00:07:36.380 ************************************ 00:07:36.380 11:50:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:36.380 11:50:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:36.380 11:50:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.380 11:50:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 ************************************ 00:07:36.380 START TEST raid_state_function_test_sb 00:07:36.380 ************************************ 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:36.380 Process raid pid: 75330 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75330 
00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75330' 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75330 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75330 ']' 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.380 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 [2024-12-06 11:50:34.097298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.380 [2024-12-06 11:50:34.097511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.640 [2024-12-06 11:50:34.243743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.640 [2024-12-06 11:50:34.290729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.640 [2024-12-06 11:50:34.334305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.640 [2024-12-06 11:50:34.334400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.210 [2024-12-06 11:50:34.920121] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.210 [2024-12-06 11:50:34.920221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.210 [2024-12-06 11:50:34.920266] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.210 [2024-12-06 11:50:34.920290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.210 [2024-12-06 11:50:34.920307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:37.210 [2024-12-06 11:50:34.920330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.210 11:50:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.469 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.469 "name": "Existed_Raid", 00:07:37.469 "uuid": "a2bb7827-a54f-4620-b28f-7faf6126a4a1", 00:07:37.469 "strip_size_kb": 64, 00:07:37.469 "state": "configuring", 00:07:37.469 "raid_level": "raid0", 00:07:37.469 "superblock": true, 00:07:37.469 "num_base_bdevs": 3, 00:07:37.469 "num_base_bdevs_discovered": 0, 00:07:37.469 "num_base_bdevs_operational": 3, 00:07:37.469 "base_bdevs_list": [ 00:07:37.469 { 00:07:37.469 "name": "BaseBdev1", 00:07:37.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.469 "is_configured": false, 00:07:37.469 "data_offset": 0, 00:07:37.469 "data_size": 0 00:07:37.469 }, 00:07:37.469 { 00:07:37.469 "name": "BaseBdev2", 00:07:37.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.469 "is_configured": false, 00:07:37.469 "data_offset": 0, 00:07:37.469 "data_size": 0 00:07:37.469 }, 00:07:37.469 { 00:07:37.469 "name": "BaseBdev3", 00:07:37.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.469 "is_configured": false, 00:07:37.469 "data_offset": 0, 00:07:37.469 "data_size": 0 00:07:37.469 } 00:07:37.469 ] 00:07:37.469 }' 00:07:37.469 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.469 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 [2024-12-06 11:50:35.375281] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.728 [2024-12-06 11:50:35.375363] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 [2024-12-06 11:50:35.387280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.728 [2024-12-06 11:50:35.387317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.728 [2024-12-06 11:50:35.387326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.728 [2024-12-06 11:50:35.387335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.728 [2024-12-06 11:50:35.387341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.728 [2024-12-06 11:50:35.387350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 [2024-12-06 11:50:35.408086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.728 BaseBdev1 
00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.728 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.728 [ 00:07:37.728 { 00:07:37.728 "name": "BaseBdev1", 00:07:37.728 "aliases": [ 00:07:37.728 "d7d9d854-8a9a-4bc4-b01e-0f717f47124a" 00:07:37.728 ], 00:07:37.728 "product_name": "Malloc disk", 00:07:37.728 "block_size": 512, 00:07:37.728 "num_blocks": 65536, 00:07:37.728 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:37.728 "assigned_rate_limits": { 00:07:37.728 
"rw_ios_per_sec": 0, 00:07:37.728 "rw_mbytes_per_sec": 0, 00:07:37.728 "r_mbytes_per_sec": 0, 00:07:37.728 "w_mbytes_per_sec": 0 00:07:37.728 }, 00:07:37.728 "claimed": true, 00:07:37.728 "claim_type": "exclusive_write", 00:07:37.728 "zoned": false, 00:07:37.728 "supported_io_types": { 00:07:37.729 "read": true, 00:07:37.729 "write": true, 00:07:37.729 "unmap": true, 00:07:37.729 "flush": true, 00:07:37.729 "reset": true, 00:07:37.729 "nvme_admin": false, 00:07:37.729 "nvme_io": false, 00:07:37.729 "nvme_io_md": false, 00:07:37.729 "write_zeroes": true, 00:07:37.729 "zcopy": true, 00:07:37.729 "get_zone_info": false, 00:07:37.729 "zone_management": false, 00:07:37.729 "zone_append": false, 00:07:37.729 "compare": false, 00:07:37.729 "compare_and_write": false, 00:07:37.729 "abort": true, 00:07:37.729 "seek_hole": false, 00:07:37.729 "seek_data": false, 00:07:37.729 "copy": true, 00:07:37.729 "nvme_iov_md": false 00:07:37.729 }, 00:07:37.729 "memory_domains": [ 00:07:37.729 { 00:07:37.729 "dma_device_id": "system", 00:07:37.729 "dma_device_type": 1 00:07:37.729 }, 00:07:37.729 { 00:07:37.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.729 "dma_device_type": 2 00:07:37.729 } 00:07:37.729 ], 00:07:37.729 "driver_specific": {} 00:07:37.729 } 00:07:37.729 ] 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.729 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.988 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.988 "name": "Existed_Raid", 00:07:37.988 "uuid": "c404b232-866f-49c7-ac16-9e9cc1ed0aef", 00:07:37.988 "strip_size_kb": 64, 00:07:37.989 "state": "configuring", 00:07:37.989 "raid_level": "raid0", 00:07:37.989 "superblock": true, 00:07:37.989 "num_base_bdevs": 3, 00:07:37.989 "num_base_bdevs_discovered": 1, 00:07:37.989 "num_base_bdevs_operational": 3, 00:07:37.989 "base_bdevs_list": [ 00:07:37.989 { 00:07:37.989 "name": "BaseBdev1", 00:07:37.989 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:37.989 "is_configured": true, 00:07:37.989 "data_offset": 2048, 00:07:37.989 "data_size": 63488 
00:07:37.989 }, 00:07:37.989 { 00:07:37.989 "name": "BaseBdev2", 00:07:37.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.989 "is_configured": false, 00:07:37.989 "data_offset": 0, 00:07:37.989 "data_size": 0 00:07:37.989 }, 00:07:37.989 { 00:07:37.989 "name": "BaseBdev3", 00:07:37.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.989 "is_configured": false, 00:07:37.989 "data_offset": 0, 00:07:37.989 "data_size": 0 00:07:37.989 } 00:07:37.989 ] 00:07:37.989 }' 00:07:37.989 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.989 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.248 [2024-12-06 11:50:35.819408] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.248 [2024-12-06 11:50:35.819459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.248 [2024-12-06 11:50:35.831441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.248 [2024-12-06 
11:50:35.833226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.248 [2024-12-06 11:50:35.833278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.248 [2024-12-06 11:50:35.833287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:38.248 [2024-12-06 11:50:35.833297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.248 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.249 "name": "Existed_Raid", 00:07:38.249 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:38.249 "strip_size_kb": 64, 00:07:38.249 "state": "configuring", 00:07:38.249 "raid_level": "raid0", 00:07:38.249 "superblock": true, 00:07:38.249 "num_base_bdevs": 3, 00:07:38.249 "num_base_bdevs_discovered": 1, 00:07:38.249 "num_base_bdevs_operational": 3, 00:07:38.249 "base_bdevs_list": [ 00:07:38.249 { 00:07:38.249 "name": "BaseBdev1", 00:07:38.249 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:38.249 "is_configured": true, 00:07:38.249 "data_offset": 2048, 00:07:38.249 "data_size": 63488 00:07:38.249 }, 00:07:38.249 { 00:07:38.249 "name": "BaseBdev2", 00:07:38.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.249 "is_configured": false, 00:07:38.249 "data_offset": 0, 00:07:38.249 "data_size": 0 00:07:38.249 }, 00:07:38.249 { 00:07:38.249 "name": "BaseBdev3", 00:07:38.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.249 "is_configured": false, 00:07:38.249 "data_offset": 0, 00:07:38.249 "data_size": 0 00:07:38.249 } 00:07:38.249 ] 00:07:38.249 }' 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.249 11:50:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 [2024-12-06 11:50:36.261573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.508 BaseBdev2 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:38.508 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 [ 00:07:38.768 { 00:07:38.768 "name": "BaseBdev2", 00:07:38.768 "aliases": [ 00:07:38.768 "9262a98e-1233-4c62-b752-bff8b4439a80" 00:07:38.768 ], 00:07:38.768 "product_name": "Malloc disk", 00:07:38.768 "block_size": 512, 00:07:38.768 "num_blocks": 65536, 00:07:38.768 "uuid": "9262a98e-1233-4c62-b752-bff8b4439a80", 00:07:38.768 "assigned_rate_limits": { 00:07:38.768 "rw_ios_per_sec": 0, 00:07:38.768 "rw_mbytes_per_sec": 0, 00:07:38.768 "r_mbytes_per_sec": 0, 00:07:38.768 "w_mbytes_per_sec": 0 00:07:38.768 }, 00:07:38.768 "claimed": true, 00:07:38.768 "claim_type": "exclusive_write", 00:07:38.768 "zoned": false, 00:07:38.768 "supported_io_types": { 00:07:38.768 "read": true, 00:07:38.768 "write": true, 00:07:38.768 "unmap": true, 00:07:38.768 "flush": true, 00:07:38.768 "reset": true, 00:07:38.768 "nvme_admin": false, 00:07:38.768 "nvme_io": false, 00:07:38.768 "nvme_io_md": false, 00:07:38.768 "write_zeroes": true, 00:07:38.768 "zcopy": true, 00:07:38.768 "get_zone_info": false, 00:07:38.768 "zone_management": false, 00:07:38.768 "zone_append": false, 00:07:38.768 "compare": false, 00:07:38.768 "compare_and_write": false, 00:07:38.768 "abort": true, 00:07:38.768 "seek_hole": false, 00:07:38.768 "seek_data": false, 00:07:38.768 "copy": true, 00:07:38.768 "nvme_iov_md": false 00:07:38.768 }, 00:07:38.768 "memory_domains": [ 00:07:38.768 { 00:07:38.768 "dma_device_id": "system", 00:07:38.768 "dma_device_type": 1 00:07:38.768 }, 00:07:38.768 { 00:07:38.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.768 "dma_device_type": 2 00:07:38.768 } 00:07:38.768 ], 00:07:38.768 "driver_specific": {} 00:07:38.768 } 00:07:38.768 ] 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.768 "name": "Existed_Raid", 00:07:38.768 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:38.768 "strip_size_kb": 64, 00:07:38.768 "state": "configuring", 00:07:38.768 "raid_level": "raid0", 00:07:38.768 "superblock": true, 00:07:38.768 "num_base_bdevs": 3, 00:07:38.768 "num_base_bdevs_discovered": 2, 00:07:38.768 "num_base_bdevs_operational": 3, 00:07:38.768 "base_bdevs_list": [ 00:07:38.768 { 00:07:38.768 "name": "BaseBdev1", 00:07:38.768 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:38.768 "is_configured": true, 00:07:38.768 "data_offset": 2048, 00:07:38.768 "data_size": 63488 00:07:38.768 }, 00:07:38.768 { 00:07:38.768 "name": "BaseBdev2", 00:07:38.768 "uuid": "9262a98e-1233-4c62-b752-bff8b4439a80", 00:07:38.768 "is_configured": true, 00:07:38.768 "data_offset": 2048, 00:07:38.768 "data_size": 63488 00:07:38.768 }, 00:07:38.768 { 00:07:38.768 "name": "BaseBdev3", 00:07:38.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.768 "is_configured": false, 00:07:38.768 "data_offset": 0, 00:07:38.768 "data_size": 0 00:07:38.768 } 00:07:38.768 ] 00:07:38.768 }' 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.768 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.027 BaseBdev3 00:07:39.027 [2024-12-06 11:50:36.763798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:39.027 [2024-12-06 
11:50:36.763983] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:39.027 [2024-12-06 11:50:36.764007] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:39.027 [2024-12-06 11:50:36.764293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:39.027 [2024-12-06 11:50:36.764441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:39.027 [2024-12-06 11:50:36.764450] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:39.027 [2024-12-06 11:50:36.764560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.027 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.302 [ 00:07:39.302 { 00:07:39.302 "name": "BaseBdev3", 00:07:39.302 "aliases": [ 00:07:39.302 "52abcc6b-723b-4e30-8931-55c02e4a93a5" 00:07:39.302 ], 00:07:39.302 "product_name": "Malloc disk", 00:07:39.302 "block_size": 512, 00:07:39.302 "num_blocks": 65536, 00:07:39.302 "uuid": "52abcc6b-723b-4e30-8931-55c02e4a93a5", 00:07:39.302 "assigned_rate_limits": { 00:07:39.302 "rw_ios_per_sec": 0, 00:07:39.302 "rw_mbytes_per_sec": 0, 00:07:39.302 "r_mbytes_per_sec": 0, 00:07:39.302 "w_mbytes_per_sec": 0 00:07:39.302 }, 00:07:39.302 "claimed": true, 00:07:39.303 "claim_type": "exclusive_write", 00:07:39.303 "zoned": false, 00:07:39.303 "supported_io_types": { 00:07:39.303 "read": true, 00:07:39.303 "write": true, 00:07:39.303 "unmap": true, 00:07:39.303 "flush": true, 00:07:39.303 "reset": true, 00:07:39.303 "nvme_admin": false, 00:07:39.303 "nvme_io": false, 00:07:39.303 "nvme_io_md": false, 00:07:39.303 "write_zeroes": true, 00:07:39.303 "zcopy": true, 00:07:39.303 "get_zone_info": false, 00:07:39.303 "zone_management": false, 00:07:39.303 "zone_append": false, 00:07:39.303 "compare": false, 00:07:39.303 "compare_and_write": false, 00:07:39.303 "abort": true, 00:07:39.303 "seek_hole": false, 00:07:39.303 "seek_data": false, 00:07:39.303 "copy": true, 00:07:39.303 "nvme_iov_md": false 00:07:39.303 }, 00:07:39.303 "memory_domains": [ 00:07:39.303 { 00:07:39.303 "dma_device_id": "system", 00:07:39.303 "dma_device_type": 1 00:07:39.303 }, 00:07:39.303 { 00:07:39.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.303 "dma_device_type": 2 00:07:39.303 } 00:07:39.303 ], 00:07:39.303 "driver_specific": {} 
00:07:39.303 } 00:07:39.303 ] 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.303 "name": "Existed_Raid", 00:07:39.303 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:39.303 "strip_size_kb": 64, 00:07:39.303 "state": "online", 00:07:39.303 "raid_level": "raid0", 00:07:39.303 "superblock": true, 00:07:39.303 "num_base_bdevs": 3, 00:07:39.303 "num_base_bdevs_discovered": 3, 00:07:39.303 "num_base_bdevs_operational": 3, 00:07:39.303 "base_bdevs_list": [ 00:07:39.303 { 00:07:39.303 "name": "BaseBdev1", 00:07:39.303 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:39.303 "is_configured": true, 00:07:39.303 "data_offset": 2048, 00:07:39.303 "data_size": 63488 00:07:39.303 }, 00:07:39.303 { 00:07:39.303 "name": "BaseBdev2", 00:07:39.303 "uuid": "9262a98e-1233-4c62-b752-bff8b4439a80", 00:07:39.303 "is_configured": true, 00:07:39.303 "data_offset": 2048, 00:07:39.303 "data_size": 63488 00:07:39.303 }, 00:07:39.303 { 00:07:39.303 "name": "BaseBdev3", 00:07:39.303 "uuid": "52abcc6b-723b-4e30-8931-55c02e4a93a5", 00:07:39.303 "is_configured": true, 00:07:39.303 "data_offset": 2048, 00:07:39.303 "data_size": 63488 00:07:39.303 } 00:07:39.303 ] 00:07:39.303 }' 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.303 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 [2024-12-06 11:50:37.183573] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.563 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.563 "name": "Existed_Raid", 00:07:39.563 "aliases": [ 00:07:39.563 "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8" 00:07:39.563 ], 00:07:39.563 "product_name": "Raid Volume", 00:07:39.563 "block_size": 512, 00:07:39.563 "num_blocks": 190464, 00:07:39.563 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:39.563 "assigned_rate_limits": { 00:07:39.563 "rw_ios_per_sec": 0, 00:07:39.563 "rw_mbytes_per_sec": 0, 00:07:39.563 "r_mbytes_per_sec": 0, 00:07:39.563 "w_mbytes_per_sec": 0 00:07:39.563 }, 00:07:39.563 "claimed": false, 00:07:39.563 "zoned": false, 00:07:39.563 "supported_io_types": { 00:07:39.563 "read": true, 00:07:39.563 "write": true, 00:07:39.563 "unmap": true, 00:07:39.563 "flush": true, 00:07:39.563 "reset": true, 00:07:39.563 "nvme_admin": false, 00:07:39.563 "nvme_io": false, 00:07:39.563 "nvme_io_md": false, 00:07:39.563 
"write_zeroes": true, 00:07:39.563 "zcopy": false, 00:07:39.563 "get_zone_info": false, 00:07:39.563 "zone_management": false, 00:07:39.563 "zone_append": false, 00:07:39.563 "compare": false, 00:07:39.563 "compare_and_write": false, 00:07:39.563 "abort": false, 00:07:39.563 "seek_hole": false, 00:07:39.563 "seek_data": false, 00:07:39.563 "copy": false, 00:07:39.563 "nvme_iov_md": false 00:07:39.563 }, 00:07:39.563 "memory_domains": [ 00:07:39.563 { 00:07:39.563 "dma_device_id": "system", 00:07:39.563 "dma_device_type": 1 00:07:39.563 }, 00:07:39.563 { 00:07:39.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.564 "dma_device_type": 2 00:07:39.564 }, 00:07:39.564 { 00:07:39.564 "dma_device_id": "system", 00:07:39.564 "dma_device_type": 1 00:07:39.564 }, 00:07:39.564 { 00:07:39.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.564 "dma_device_type": 2 00:07:39.564 }, 00:07:39.564 { 00:07:39.564 "dma_device_id": "system", 00:07:39.564 "dma_device_type": 1 00:07:39.564 }, 00:07:39.564 { 00:07:39.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.564 "dma_device_type": 2 00:07:39.564 } 00:07:39.564 ], 00:07:39.564 "driver_specific": { 00:07:39.564 "raid": { 00:07:39.564 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:39.564 "strip_size_kb": 64, 00:07:39.564 "state": "online", 00:07:39.564 "raid_level": "raid0", 00:07:39.564 "superblock": true, 00:07:39.564 "num_base_bdevs": 3, 00:07:39.564 "num_base_bdevs_discovered": 3, 00:07:39.564 "num_base_bdevs_operational": 3, 00:07:39.564 "base_bdevs_list": [ 00:07:39.564 { 00:07:39.564 "name": "BaseBdev1", 00:07:39.564 "uuid": "d7d9d854-8a9a-4bc4-b01e-0f717f47124a", 00:07:39.564 "is_configured": true, 00:07:39.564 "data_offset": 2048, 00:07:39.564 "data_size": 63488 00:07:39.564 }, 00:07:39.564 { 00:07:39.564 "name": "BaseBdev2", 00:07:39.564 "uuid": "9262a98e-1233-4c62-b752-bff8b4439a80", 00:07:39.564 "is_configured": true, 00:07:39.564 "data_offset": 2048, 00:07:39.564 "data_size": 63488 00:07:39.564 }, 
00:07:39.564 { 00:07:39.564 "name": "BaseBdev3", 00:07:39.564 "uuid": "52abcc6b-723b-4e30-8931-55c02e4a93a5", 00:07:39.564 "is_configured": true, 00:07:39.564 "data_offset": 2048, 00:07:39.564 "data_size": 63488 00:07:39.564 } 00:07:39.564 ] 00:07:39.564 } 00:07:39.564 } 00:07:39.564 }' 00:07:39.564 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.564 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.564 BaseBdev2 00:07:39.564 BaseBdev3' 00:07:39.564 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.564 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.564 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.823 
11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.823 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.823 [2024-12-06 11:50:37.431347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.823 [2024-12-06 11:50:37.431373] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.823 [2024-12-06 11:50:37.431423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.824 "name": "Existed_Raid", 00:07:39.824 "uuid": "bfdaf725-a3b7-45d6-ac91-f44d786ebfc8", 00:07:39.824 "strip_size_kb": 64, 00:07:39.824 "state": "offline", 00:07:39.824 "raid_level": "raid0", 00:07:39.824 "superblock": true, 00:07:39.824 "num_base_bdevs": 3, 00:07:39.824 "num_base_bdevs_discovered": 2, 00:07:39.824 "num_base_bdevs_operational": 2, 00:07:39.824 "base_bdevs_list": [ 00:07:39.824 { 00:07:39.824 "name": null, 00:07:39.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.824 "is_configured": false, 00:07:39.824 "data_offset": 0, 00:07:39.824 "data_size": 63488 00:07:39.824 }, 00:07:39.824 { 00:07:39.824 "name": "BaseBdev2", 00:07:39.824 "uuid": "9262a98e-1233-4c62-b752-bff8b4439a80", 00:07:39.824 "is_configured": true, 00:07:39.824 "data_offset": 2048, 00:07:39.824 "data_size": 63488 00:07:39.824 }, 00:07:39.824 { 00:07:39.824 "name": "BaseBdev3", 00:07:39.824 "uuid": "52abcc6b-723b-4e30-8931-55c02e4a93a5", 
00:07:39.824 "is_configured": true, 00:07:39.824 "data_offset": 2048, 00:07:39.824 "data_size": 63488 00:07:39.824 } 00:07:39.824 ] 00:07:39.824 }' 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.824 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 [2024-12-06 11:50:37.925807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 [2024-12-06 11:50:37.996943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:40.394 [2024-12-06 11:50:37.997033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 BaseBdev2 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:40.394 11:50:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 [ 00:07:40.394 { 00:07:40.394 "name": "BaseBdev2", 00:07:40.394 "aliases": [ 00:07:40.394 "05f0bb93-6407-4862-87e0-435761657f35" 00:07:40.394 ], 00:07:40.394 "product_name": "Malloc disk", 00:07:40.394 "block_size": 512, 00:07:40.394 "num_blocks": 65536, 00:07:40.394 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:40.394 "assigned_rate_limits": { 00:07:40.394 "rw_ios_per_sec": 0, 00:07:40.394 "rw_mbytes_per_sec": 0, 00:07:40.394 "r_mbytes_per_sec": 0, 00:07:40.394 "w_mbytes_per_sec": 0 00:07:40.394 }, 00:07:40.394 "claimed": false, 00:07:40.394 "zoned": false, 00:07:40.394 "supported_io_types": { 00:07:40.394 "read": true, 00:07:40.394 "write": true, 00:07:40.394 "unmap": true, 00:07:40.394 "flush": true, 00:07:40.394 "reset": true, 00:07:40.394 "nvme_admin": false, 00:07:40.394 "nvme_io": false, 00:07:40.394 "nvme_io_md": false, 00:07:40.394 "write_zeroes": true, 00:07:40.394 "zcopy": true, 00:07:40.394 "get_zone_info": false, 00:07:40.394 
"zone_management": false, 00:07:40.394 "zone_append": false, 00:07:40.394 "compare": false, 00:07:40.394 "compare_and_write": false, 00:07:40.394 "abort": true, 00:07:40.394 "seek_hole": false, 00:07:40.394 "seek_data": false, 00:07:40.394 "copy": true, 00:07:40.394 "nvme_iov_md": false 00:07:40.394 }, 00:07:40.394 "memory_domains": [ 00:07:40.394 { 00:07:40.394 "dma_device_id": "system", 00:07:40.394 "dma_device_type": 1 00:07:40.394 }, 00:07:40.394 { 00:07:40.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.394 "dma_device_type": 2 00:07:40.394 } 00:07:40.394 ], 00:07:40.394 "driver_specific": {} 00:07:40.394 } 00:07:40.394 ] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.394 BaseBdev3 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.394 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.395 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.655 [ 00:07:40.655 { 00:07:40.655 "name": "BaseBdev3", 00:07:40.655 "aliases": [ 00:07:40.655 "d2413cf7-97b6-43c4-b2c1-b6606eab62cc" 00:07:40.655 ], 00:07:40.655 "product_name": "Malloc disk", 00:07:40.655 "block_size": 512, 00:07:40.655 "num_blocks": 65536, 00:07:40.655 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:40.655 "assigned_rate_limits": { 00:07:40.655 "rw_ios_per_sec": 0, 00:07:40.655 "rw_mbytes_per_sec": 0, 00:07:40.655 "r_mbytes_per_sec": 0, 00:07:40.655 "w_mbytes_per_sec": 0 00:07:40.655 }, 00:07:40.655 "claimed": false, 00:07:40.655 "zoned": false, 00:07:40.655 "supported_io_types": { 00:07:40.655 "read": true, 00:07:40.655 "write": true, 00:07:40.655 "unmap": true, 00:07:40.655 "flush": true, 00:07:40.655 "reset": true, 00:07:40.655 "nvme_admin": false, 00:07:40.655 "nvme_io": false, 00:07:40.655 "nvme_io_md": false, 00:07:40.655 "write_zeroes": true, 00:07:40.655 
"zcopy": true, 00:07:40.655 "get_zone_info": false, 00:07:40.655 "zone_management": false, 00:07:40.655 "zone_append": false, 00:07:40.655 "compare": false, 00:07:40.655 "compare_and_write": false, 00:07:40.655 "abort": true, 00:07:40.655 "seek_hole": false, 00:07:40.655 "seek_data": false, 00:07:40.655 "copy": true, 00:07:40.655 "nvme_iov_md": false 00:07:40.655 }, 00:07:40.655 "memory_domains": [ 00:07:40.655 { 00:07:40.655 "dma_device_id": "system", 00:07:40.655 "dma_device_type": 1 00:07:40.655 }, 00:07:40.655 { 00:07:40.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.655 "dma_device_type": 2 00:07:40.655 } 00:07:40.655 ], 00:07:40.655 "driver_specific": {} 00:07:40.655 } 00:07:40.655 ] 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.655 [2024-12-06 11:50:38.168003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.655 [2024-12-06 11:50:38.168089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.655 [2024-12-06 11:50:38.168132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.655 [2024-12-06 11:50:38.169872] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.655 11:50:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.655 "name": "Existed_Raid", 00:07:40.655 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:40.655 "strip_size_kb": 64, 00:07:40.655 "state": "configuring", 00:07:40.655 "raid_level": "raid0", 00:07:40.655 "superblock": true, 00:07:40.655 "num_base_bdevs": 3, 00:07:40.655 "num_base_bdevs_discovered": 2, 00:07:40.655 "num_base_bdevs_operational": 3, 00:07:40.655 "base_bdevs_list": [ 00:07:40.655 { 00:07:40.655 "name": "BaseBdev1", 00:07:40.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.655 "is_configured": false, 00:07:40.655 "data_offset": 0, 00:07:40.655 "data_size": 0 00:07:40.655 }, 00:07:40.655 { 00:07:40.655 "name": "BaseBdev2", 00:07:40.655 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:40.655 "is_configured": true, 00:07:40.655 "data_offset": 2048, 00:07:40.655 "data_size": 63488 00:07:40.655 }, 00:07:40.655 { 00:07:40.655 "name": "BaseBdev3", 00:07:40.655 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:40.655 "is_configured": true, 00:07:40.655 "data_offset": 2048, 00:07:40.655 "data_size": 63488 00:07:40.655 } 00:07:40.655 ] 00:07:40.655 }' 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.655 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.923 [2024-12-06 11:50:38.559322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.923 11:50:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.923 "name": "Existed_Raid", 00:07:40.923 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:40.923 "strip_size_kb": 64, 
00:07:40.923 "state": "configuring", 00:07:40.923 "raid_level": "raid0", 00:07:40.923 "superblock": true, 00:07:40.923 "num_base_bdevs": 3, 00:07:40.923 "num_base_bdevs_discovered": 1, 00:07:40.923 "num_base_bdevs_operational": 3, 00:07:40.923 "base_bdevs_list": [ 00:07:40.923 { 00:07:40.923 "name": "BaseBdev1", 00:07:40.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.923 "is_configured": false, 00:07:40.923 "data_offset": 0, 00:07:40.923 "data_size": 0 00:07:40.923 }, 00:07:40.923 { 00:07:40.923 "name": null, 00:07:40.923 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:40.923 "is_configured": false, 00:07:40.923 "data_offset": 0, 00:07:40.923 "data_size": 63488 00:07:40.923 }, 00:07:40.923 { 00:07:40.923 "name": "BaseBdev3", 00:07:40.923 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:40.923 "is_configured": true, 00:07:40.923 "data_offset": 2048, 00:07:40.923 "data_size": 63488 00:07:40.923 } 00:07:40.923 ] 00:07:40.923 }' 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.923 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 [2024-12-06 11:50:38.997662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.537 BaseBdev1 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.537 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 
[ 00:07:41.537 { 00:07:41.537 "name": "BaseBdev1", 00:07:41.537 "aliases": [ 00:07:41.537 "a328614f-29c9-437b-a49c-d8826e3a54c9" 00:07:41.537 ], 00:07:41.537 "product_name": "Malloc disk", 00:07:41.537 "block_size": 512, 00:07:41.537 "num_blocks": 65536, 00:07:41.537 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:41.537 "assigned_rate_limits": { 00:07:41.537 "rw_ios_per_sec": 0, 00:07:41.537 "rw_mbytes_per_sec": 0, 00:07:41.537 "r_mbytes_per_sec": 0, 00:07:41.537 "w_mbytes_per_sec": 0 00:07:41.537 }, 00:07:41.537 "claimed": true, 00:07:41.537 "claim_type": "exclusive_write", 00:07:41.537 "zoned": false, 00:07:41.537 "supported_io_types": { 00:07:41.537 "read": true, 00:07:41.537 "write": true, 00:07:41.537 "unmap": true, 00:07:41.537 "flush": true, 00:07:41.537 "reset": true, 00:07:41.537 "nvme_admin": false, 00:07:41.537 "nvme_io": false, 00:07:41.537 "nvme_io_md": false, 00:07:41.537 "write_zeroes": true, 00:07:41.537 "zcopy": true, 00:07:41.537 "get_zone_info": false, 00:07:41.537 "zone_management": false, 00:07:41.537 "zone_append": false, 00:07:41.537 "compare": false, 00:07:41.537 "compare_and_write": false, 00:07:41.537 "abort": true, 00:07:41.537 "seek_hole": false, 00:07:41.537 "seek_data": false, 00:07:41.537 "copy": true, 00:07:41.537 "nvme_iov_md": false 00:07:41.537 }, 00:07:41.537 "memory_domains": [ 00:07:41.537 { 00:07:41.537 "dma_device_id": "system", 00:07:41.537 "dma_device_type": 1 00:07:41.537 }, 00:07:41.537 { 00:07:41.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.537 "dma_device_type": 2 00:07:41.537 } 00:07:41.537 ], 00:07:41.537 "driver_specific": {} 00:07:41.537 } 00:07:41.537 ] 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.537 "name": "Existed_Raid", 00:07:41.537 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:41.537 "strip_size_kb": 64, 00:07:41.537 "state": "configuring", 00:07:41.537 "raid_level": "raid0", 00:07:41.537 "superblock": true, 
00:07:41.537 "num_base_bdevs": 3, 00:07:41.537 "num_base_bdevs_discovered": 2, 00:07:41.537 "num_base_bdevs_operational": 3, 00:07:41.537 "base_bdevs_list": [ 00:07:41.537 { 00:07:41.537 "name": "BaseBdev1", 00:07:41.537 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:41.537 "is_configured": true, 00:07:41.537 "data_offset": 2048, 00:07:41.537 "data_size": 63488 00:07:41.537 }, 00:07:41.537 { 00:07:41.537 "name": null, 00:07:41.537 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:41.537 "is_configured": false, 00:07:41.537 "data_offset": 0, 00:07:41.537 "data_size": 63488 00:07:41.537 }, 00:07:41.537 { 00:07:41.537 "name": "BaseBdev3", 00:07:41.537 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:41.537 "is_configured": true, 00:07:41.537 "data_offset": 2048, 00:07:41.537 "data_size": 63488 00:07:41.537 } 00:07:41.537 ] 00:07:41.537 }' 00:07:41.537 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.538 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 [2024-12-06 11:50:39.536743] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:41.812 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.070 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.070 "name": "Existed_Raid", 00:07:42.070 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:42.070 "strip_size_kb": 64, 00:07:42.070 "state": "configuring", 00:07:42.070 "raid_level": "raid0", 00:07:42.070 "superblock": true, 00:07:42.070 "num_base_bdevs": 3, 00:07:42.070 "num_base_bdevs_discovered": 1, 00:07:42.070 "num_base_bdevs_operational": 3, 00:07:42.070 "base_bdevs_list": [ 00:07:42.070 { 00:07:42.070 "name": "BaseBdev1", 00:07:42.070 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:42.070 "is_configured": true, 00:07:42.070 "data_offset": 2048, 00:07:42.070 "data_size": 63488 00:07:42.070 }, 00:07:42.070 { 00:07:42.070 "name": null, 00:07:42.070 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:42.070 "is_configured": false, 00:07:42.070 "data_offset": 0, 00:07:42.070 "data_size": 63488 00:07:42.070 }, 00:07:42.070 { 00:07:42.070 "name": null, 00:07:42.070 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:42.070 "is_configured": false, 00:07:42.070 "data_offset": 0, 00:07:42.070 "data_size": 63488 00:07:42.070 } 00:07:42.070 ] 00:07:42.070 }' 00:07:42.070 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.070 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.328 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.329 [2024-12-06 11:50:40.063843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.329 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.586 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.586 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.587 "name": "Existed_Raid", 00:07:42.587 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:42.587 "strip_size_kb": 64, 00:07:42.587 "state": "configuring", 00:07:42.587 "raid_level": "raid0", 00:07:42.587 "superblock": true, 00:07:42.587 "num_base_bdevs": 3, 00:07:42.587 "num_base_bdevs_discovered": 2, 00:07:42.587 "num_base_bdevs_operational": 3, 00:07:42.587 "base_bdevs_list": [ 00:07:42.587 { 00:07:42.587 "name": "BaseBdev1", 00:07:42.587 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:42.587 "is_configured": true, 00:07:42.587 "data_offset": 2048, 00:07:42.587 "data_size": 63488 00:07:42.587 }, 00:07:42.587 { 00:07:42.587 "name": null, 00:07:42.587 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:42.587 "is_configured": false, 00:07:42.587 "data_offset": 0, 00:07:42.587 "data_size": 63488 00:07:42.587 }, 00:07:42.587 { 00:07:42.587 "name": "BaseBdev3", 00:07:42.587 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:42.587 "is_configured": true, 00:07:42.587 "data_offset": 2048, 00:07:42.587 "data_size": 63488 00:07:42.587 } 00:07:42.587 ] 00:07:42.587 }' 00:07:42.587 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.587 11:50:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 [2024-12-06 11:50:40.579091] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.845 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.104 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.104 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.104 "name": "Existed_Raid", 00:07:43.104 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:43.104 "strip_size_kb": 64, 00:07:43.104 "state": "configuring", 00:07:43.104 "raid_level": "raid0", 00:07:43.104 "superblock": true, 00:07:43.104 "num_base_bdevs": 3, 00:07:43.104 "num_base_bdevs_discovered": 1, 00:07:43.104 "num_base_bdevs_operational": 3, 00:07:43.104 "base_bdevs_list": [ 00:07:43.104 { 00:07:43.104 "name": null, 00:07:43.104 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:43.104 "is_configured": false, 00:07:43.104 "data_offset": 0, 00:07:43.104 "data_size": 63488 00:07:43.105 }, 00:07:43.105 { 00:07:43.105 "name": null, 00:07:43.105 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:43.105 "is_configured": false, 00:07:43.105 "data_offset": 0, 00:07:43.105 
"data_size": 63488 00:07:43.105 }, 00:07:43.105 { 00:07:43.105 "name": "BaseBdev3", 00:07:43.105 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:43.105 "is_configured": true, 00:07:43.105 "data_offset": 2048, 00:07:43.105 "data_size": 63488 00:07:43.105 } 00:07:43.105 ] 00:07:43.105 }' 00:07:43.105 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.105 11:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 [2024-12-06 11:50:41.056924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.363 11:50:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.363 "name": "Existed_Raid", 00:07:43.363 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:43.363 "strip_size_kb": 64, 00:07:43.363 "state": "configuring", 00:07:43.363 "raid_level": "raid0", 00:07:43.363 "superblock": true, 00:07:43.363 "num_base_bdevs": 3, 00:07:43.363 
"num_base_bdevs_discovered": 2, 00:07:43.363 "num_base_bdevs_operational": 3, 00:07:43.363 "base_bdevs_list": [ 00:07:43.363 { 00:07:43.363 "name": null, 00:07:43.363 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:43.363 "is_configured": false, 00:07:43.363 "data_offset": 0, 00:07:43.363 "data_size": 63488 00:07:43.363 }, 00:07:43.363 { 00:07:43.363 "name": "BaseBdev2", 00:07:43.363 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:43.363 "is_configured": true, 00:07:43.363 "data_offset": 2048, 00:07:43.363 "data_size": 63488 00:07:43.363 }, 00:07:43.363 { 00:07:43.363 "name": "BaseBdev3", 00:07:43.363 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:43.363 "is_configured": true, 00:07:43.363 "data_offset": 2048, 00:07:43.363 "data_size": 63488 00:07:43.363 } 00:07:43.363 ] 00:07:43.363 }' 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.363 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:43.932 11:50:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a328614f-29c9-437b-a49c-d8826e3a54c9 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.932 [2024-12-06 11:50:41.594974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:43.932 [2024-12-06 11:50:41.595226] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:43.932 [2024-12-06 11:50:41.595296] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:43.932 [2024-12-06 11:50:41.595550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:43.932 NewBaseBdev 00:07:43.932 [2024-12-06 11:50:41.595702] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:43.932 [2024-12-06 11:50:41.595718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:43.932 [2024-12-06 11:50:41.595826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:43.932 
11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.932 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.933 [ 00:07:43.933 { 00:07:43.933 "name": "NewBaseBdev", 00:07:43.933 "aliases": [ 00:07:43.933 "a328614f-29c9-437b-a49c-d8826e3a54c9" 00:07:43.933 ], 00:07:43.933 "product_name": "Malloc disk", 00:07:43.933 "block_size": 512, 00:07:43.933 "num_blocks": 65536, 00:07:43.933 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:43.933 "assigned_rate_limits": { 00:07:43.933 "rw_ios_per_sec": 0, 00:07:43.933 "rw_mbytes_per_sec": 0, 00:07:43.933 "r_mbytes_per_sec": 0, 00:07:43.933 "w_mbytes_per_sec": 0 00:07:43.933 }, 00:07:43.933 "claimed": true, 00:07:43.933 "claim_type": "exclusive_write", 00:07:43.933 "zoned": false, 00:07:43.933 "supported_io_types": { 00:07:43.933 "read": true, 00:07:43.933 "write": true, 00:07:43.933 
"unmap": true, 00:07:43.933 "flush": true, 00:07:43.933 "reset": true, 00:07:43.933 "nvme_admin": false, 00:07:43.933 "nvme_io": false, 00:07:43.933 "nvme_io_md": false, 00:07:43.933 "write_zeroes": true, 00:07:43.933 "zcopy": true, 00:07:43.933 "get_zone_info": false, 00:07:43.933 "zone_management": false, 00:07:43.933 "zone_append": false, 00:07:43.933 "compare": false, 00:07:43.933 "compare_and_write": false, 00:07:43.933 "abort": true, 00:07:43.933 "seek_hole": false, 00:07:43.933 "seek_data": false, 00:07:43.933 "copy": true, 00:07:43.933 "nvme_iov_md": false 00:07:43.933 }, 00:07:43.933 "memory_domains": [ 00:07:43.933 { 00:07:43.933 "dma_device_id": "system", 00:07:43.933 "dma_device_type": 1 00:07:43.933 }, 00:07:43.933 { 00:07:43.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.933 "dma_device_type": 2 00:07:43.933 } 00:07:43.933 ], 00:07:43.933 "driver_specific": {} 00:07:43.933 } 00:07:43.933 ] 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.933 "name": "Existed_Raid", 00:07:43.933 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:43.933 "strip_size_kb": 64, 00:07:43.933 "state": "online", 00:07:43.933 "raid_level": "raid0", 00:07:43.933 "superblock": true, 00:07:43.933 "num_base_bdevs": 3, 00:07:43.933 "num_base_bdevs_discovered": 3, 00:07:43.933 "num_base_bdevs_operational": 3, 00:07:43.933 "base_bdevs_list": [ 00:07:43.933 { 00:07:43.933 "name": "NewBaseBdev", 00:07:43.933 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:43.933 "is_configured": true, 00:07:43.933 "data_offset": 2048, 00:07:43.933 "data_size": 63488 00:07:43.933 }, 00:07:43.933 { 00:07:43.933 "name": "BaseBdev2", 00:07:43.933 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:43.933 "is_configured": true, 00:07:43.933 "data_offset": 2048, 00:07:43.933 "data_size": 63488 00:07:43.933 }, 00:07:43.933 { 00:07:43.933 "name": "BaseBdev3", 00:07:43.933 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:43.933 
"is_configured": true, 00:07:43.933 "data_offset": 2048, 00:07:43.933 "data_size": 63488 00:07:43.933 } 00:07:43.933 ] 00:07:43.933 }' 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.933 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.503 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.503 [2024-12-06 11:50:42.010612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.503 "name": "Existed_Raid", 00:07:44.503 "aliases": [ 00:07:44.503 "1536a910-f373-4e06-96bf-02f304c3e7e9" 00:07:44.503 ], 00:07:44.503 "product_name": "Raid 
Volume", 00:07:44.503 "block_size": 512, 00:07:44.503 "num_blocks": 190464, 00:07:44.503 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:44.503 "assigned_rate_limits": { 00:07:44.503 "rw_ios_per_sec": 0, 00:07:44.503 "rw_mbytes_per_sec": 0, 00:07:44.503 "r_mbytes_per_sec": 0, 00:07:44.503 "w_mbytes_per_sec": 0 00:07:44.503 }, 00:07:44.503 "claimed": false, 00:07:44.503 "zoned": false, 00:07:44.503 "supported_io_types": { 00:07:44.503 "read": true, 00:07:44.503 "write": true, 00:07:44.503 "unmap": true, 00:07:44.503 "flush": true, 00:07:44.503 "reset": true, 00:07:44.503 "nvme_admin": false, 00:07:44.503 "nvme_io": false, 00:07:44.503 "nvme_io_md": false, 00:07:44.503 "write_zeroes": true, 00:07:44.503 "zcopy": false, 00:07:44.503 "get_zone_info": false, 00:07:44.503 "zone_management": false, 00:07:44.503 "zone_append": false, 00:07:44.503 "compare": false, 00:07:44.503 "compare_and_write": false, 00:07:44.503 "abort": false, 00:07:44.503 "seek_hole": false, 00:07:44.503 "seek_data": false, 00:07:44.503 "copy": false, 00:07:44.503 "nvme_iov_md": false 00:07:44.503 }, 00:07:44.503 "memory_domains": [ 00:07:44.503 { 00:07:44.503 "dma_device_id": "system", 00:07:44.503 "dma_device_type": 1 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.503 "dma_device_type": 2 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "dma_device_id": "system", 00:07:44.503 "dma_device_type": 1 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.503 "dma_device_type": 2 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "dma_device_id": "system", 00:07:44.503 "dma_device_type": 1 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.503 "dma_device_type": 2 00:07:44.503 } 00:07:44.503 ], 00:07:44.503 "driver_specific": { 00:07:44.503 "raid": { 00:07:44.503 "uuid": "1536a910-f373-4e06-96bf-02f304c3e7e9", 00:07:44.503 "strip_size_kb": 64, 00:07:44.503 "state": "online", 
00:07:44.503 "raid_level": "raid0", 00:07:44.503 "superblock": true, 00:07:44.503 "num_base_bdevs": 3, 00:07:44.503 "num_base_bdevs_discovered": 3, 00:07:44.503 "num_base_bdevs_operational": 3, 00:07:44.503 "base_bdevs_list": [ 00:07:44.503 { 00:07:44.503 "name": "NewBaseBdev", 00:07:44.503 "uuid": "a328614f-29c9-437b-a49c-d8826e3a54c9", 00:07:44.503 "is_configured": true, 00:07:44.503 "data_offset": 2048, 00:07:44.503 "data_size": 63488 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "name": "BaseBdev2", 00:07:44.503 "uuid": "05f0bb93-6407-4862-87e0-435761657f35", 00:07:44.503 "is_configured": true, 00:07:44.503 "data_offset": 2048, 00:07:44.503 "data_size": 63488 00:07:44.503 }, 00:07:44.503 { 00:07:44.503 "name": "BaseBdev3", 00:07:44.503 "uuid": "d2413cf7-97b6-43c4-b2c1-b6606eab62cc", 00:07:44.503 "is_configured": true, 00:07:44.503 "data_offset": 2048, 00:07:44.503 "data_size": 63488 00:07:44.503 } 00:07:44.503 ] 00:07:44.503 } 00:07:44.503 } 00:07:44.503 }' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:44.503 BaseBdev2 00:07:44.503 BaseBdev3' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:44.503 11:50:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.504 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.504 [2024-12-06 11:50:42.257849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.504 [2024-12-06 11:50:42.257908] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.504 [2024-12-06 11:50:42.258000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.763 [2024-12-06 11:50:42.258081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.763 [2024-12-06 11:50:42.258109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75330 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75330 ']' 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75330 00:07:44.763 11:50:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75330 00:07:44.763 killing process with pid 75330 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75330' 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75330 00:07:44.763 [2024-12-06 11:50:42.306571] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.763 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75330 00:07:44.763 [2024-12-06 11:50:42.337692] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.023 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.023 00:07:45.023 real 0m8.570s 00:07:45.023 user 0m14.635s 00:07:45.023 sys 0m1.710s 00:07:45.023 ************************************ 00:07:45.023 END TEST raid_state_function_test_sb 00:07:45.023 ************************************ 00:07:45.023 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.023 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.023 11:50:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:45.023 11:50:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.023 11:50:42 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.023 11:50:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.023 ************************************ 00:07:45.023 START TEST raid_superblock_test 00:07:45.023 ************************************ 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.023 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:45.024 11:50:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75928 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75928 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75928 ']' 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.024 11:50:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.024 [2024-12-06 11:50:42.734853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.024 [2024-12-06 11:50:42.735058] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75928 ] 00:07:45.284 [2024-12-06 11:50:42.879832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.284 [2024-12-06 11:50:42.923962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.284 [2024-12-06 11:50:42.966261] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.284 [2024-12-06 11:50:42.966293] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:45.854 
11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.854 malloc1 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.854 [2024-12-06 11:50:43.588011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.854 [2024-12-06 11:50:43.588119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.854 [2024-12-06 11:50:43.588155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:45.854 [2024-12-06 11:50:43.588195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.854 [2024-12-06 11:50:43.590274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.854 [2024-12-06 11:50:43.590359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.854 pt1 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.854 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.114 malloc2 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.114 [2024-12-06 11:50:43.635104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.114 [2024-12-06 11:50:43.635340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.114 [2024-12-06 11:50:43.635424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:46.114 [2024-12-06 11:50:43.635510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.114 [2024-12-06 11:50:43.640283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.114 [2024-12-06 11:50:43.640436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.114 
pt2 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:46.114 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 malloc3 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 [2024-12-06 11:50:43.669837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:46.115 [2024-12-06 11:50:43.669940] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.115 [2024-12-06 11:50:43.669972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:46.115 [2024-12-06 11:50:43.670001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.115 [2024-12-06 11:50:43.672021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.115 [2024-12-06 11:50:43.672097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:46.115 pt3 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 [2024-12-06 11:50:43.681897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.115 [2024-12-06 11:50:43.683740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.115 [2024-12-06 11:50:43.683834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:46.115 [2024-12-06 11:50:43.684007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:46.115 [2024-12-06 11:50:43.684055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:46.115 [2024-12-06 11:50:43.684335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:07:46.115 [2024-12-06 11:50:43.684514] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:46.115 [2024-12-06 11:50:43.684560] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:46.115 [2024-12-06 11:50:43.684703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.115 11:50:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.115 "name": "raid_bdev1", 00:07:46.115 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:46.115 "strip_size_kb": 64, 00:07:46.115 "state": "online", 00:07:46.115 "raid_level": "raid0", 00:07:46.115 "superblock": true, 00:07:46.115 "num_base_bdevs": 3, 00:07:46.115 "num_base_bdevs_discovered": 3, 00:07:46.115 "num_base_bdevs_operational": 3, 00:07:46.115 "base_bdevs_list": [ 00:07:46.115 { 00:07:46.115 "name": "pt1", 00:07:46.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.115 "is_configured": true, 00:07:46.115 "data_offset": 2048, 00:07:46.115 "data_size": 63488 00:07:46.115 }, 00:07:46.115 { 00:07:46.115 "name": "pt2", 00:07:46.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.115 "is_configured": true, 00:07:46.115 "data_offset": 2048, 00:07:46.115 "data_size": 63488 00:07:46.115 }, 00:07:46.115 { 00:07:46.115 "name": "pt3", 00:07:46.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.115 "is_configured": true, 00:07:46.115 "data_offset": 2048, 00:07:46.115 "data_size": 63488 00:07:46.115 } 00:07:46.115 ] 00:07:46.115 }' 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.115 11:50:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.375 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.635 [2024-12-06 11:50:44.137345] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.635 "name": "raid_bdev1", 00:07:46.635 "aliases": [ 00:07:46.635 "7473a454-4691-47de-848a-671bb71a15d3" 00:07:46.635 ], 00:07:46.635 "product_name": "Raid Volume", 00:07:46.635 "block_size": 512, 00:07:46.635 "num_blocks": 190464, 00:07:46.635 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:46.635 "assigned_rate_limits": { 00:07:46.635 "rw_ios_per_sec": 0, 00:07:46.635 "rw_mbytes_per_sec": 0, 00:07:46.635 "r_mbytes_per_sec": 0, 00:07:46.635 "w_mbytes_per_sec": 0 00:07:46.635 }, 00:07:46.635 "claimed": false, 00:07:46.635 "zoned": false, 00:07:46.635 "supported_io_types": { 00:07:46.635 "read": true, 00:07:46.635 "write": true, 00:07:46.635 "unmap": true, 00:07:46.635 "flush": true, 00:07:46.635 "reset": true, 00:07:46.635 "nvme_admin": false, 00:07:46.635 "nvme_io": false, 00:07:46.635 "nvme_io_md": false, 00:07:46.635 "write_zeroes": true, 00:07:46.635 "zcopy": false, 00:07:46.635 "get_zone_info": false, 00:07:46.635 "zone_management": false, 00:07:46.635 "zone_append": false, 00:07:46.635 "compare": 
false, 00:07:46.635 "compare_and_write": false, 00:07:46.635 "abort": false, 00:07:46.635 "seek_hole": false, 00:07:46.635 "seek_data": false, 00:07:46.635 "copy": false, 00:07:46.635 "nvme_iov_md": false 00:07:46.635 }, 00:07:46.635 "memory_domains": [ 00:07:46.635 { 00:07:46.635 "dma_device_id": "system", 00:07:46.635 "dma_device_type": 1 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.635 "dma_device_type": 2 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "dma_device_id": "system", 00:07:46.635 "dma_device_type": 1 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.635 "dma_device_type": 2 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "dma_device_id": "system", 00:07:46.635 "dma_device_type": 1 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.635 "dma_device_type": 2 00:07:46.635 } 00:07:46.635 ], 00:07:46.635 "driver_specific": { 00:07:46.635 "raid": { 00:07:46.635 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:46.635 "strip_size_kb": 64, 00:07:46.635 "state": "online", 00:07:46.635 "raid_level": "raid0", 00:07:46.635 "superblock": true, 00:07:46.635 "num_base_bdevs": 3, 00:07:46.635 "num_base_bdevs_discovered": 3, 00:07:46.635 "num_base_bdevs_operational": 3, 00:07:46.635 "base_bdevs_list": [ 00:07:46.635 { 00:07:46.635 "name": "pt1", 00:07:46.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.635 "is_configured": true, 00:07:46.635 "data_offset": 2048, 00:07:46.635 "data_size": 63488 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "name": "pt2", 00:07:46.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.635 "is_configured": true, 00:07:46.635 "data_offset": 2048, 00:07:46.635 "data_size": 63488 00:07:46.635 }, 00:07:46.635 { 00:07:46.635 "name": "pt3", 00:07:46.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.635 "is_configured": true, 00:07:46.635 "data_offset": 2048, 00:07:46.635 "data_size": 
63488 00:07:46.635 } 00:07:46.635 ] 00:07:46.635 } 00:07:46.635 } 00:07:46.635 }' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.635 pt2 00:07:46.635 pt3' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.635 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.636 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.636 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.636 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.636 [2024-12-06 11:50:44.388840] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7473a454-4691-47de-848a-671bb71a15d3 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7473a454-4691-47de-848a-671bb71a15d3 ']' 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 [2024-12-06 11:50:44.424541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.896 [2024-12-06 11:50:44.424563] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.896 [2024-12-06 11:50:44.424629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.896 [2024-12-06 11:50:44.424686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.896 [2024-12-06 11:50:44.424696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:46.896 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.897 11:50:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 [2024-12-06 11:50:44.556389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:46.897 [2024-12-06 11:50:44.558286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:46.897 [2024-12-06 11:50:44.558367] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:46.897 [2024-12-06 11:50:44.558446] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:46.897 [2024-12-06 11:50:44.558538] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:46.897 [2024-12-06 11:50:44.558594] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:46.897 [2024-12-06 11:50:44.558638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.897 [2024-12-06 11:50:44.558670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:46.897 request: 00:07:46.897 { 00:07:46.897 "name": "raid_bdev1", 00:07:46.897 "raid_level": "raid0", 00:07:46.897 "base_bdevs": [ 00:07:46.897 "malloc1", 00:07:46.897 "malloc2", 00:07:46.897 "malloc3" 00:07:46.897 ], 00:07:46.897 "strip_size_kb": 64, 00:07:46.897 "superblock": false, 00:07:46.897 "method": "bdev_raid_create", 00:07:46.897 "req_id": 1 00:07:46.897 } 00:07:46.897 Got JSON-RPC error response 00:07:46.897 response: 00:07:46.897 { 00:07:46.897 "code": -17, 00:07:46.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:46.897 } 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 [2024-12-06 11:50:44.608221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.897 [2024-12-06 11:50:44.608313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.897 [2024-12-06 11:50:44.608343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:46.897 [2024-12-06 11:50:44.608369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.897 [2024-12-06 11:50:44.610415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.897 [2024-12-06 11:50:44.610485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.897 [2024-12-06 11:50:44.610563] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:46.897 [2024-12-06 11:50:44.610640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:07:46.897 pt1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.897 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.157 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.157 "name": "raid_bdev1", 00:07:47.157 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:47.157 
"strip_size_kb": 64, 00:07:47.157 "state": "configuring", 00:07:47.157 "raid_level": "raid0", 00:07:47.157 "superblock": true, 00:07:47.157 "num_base_bdevs": 3, 00:07:47.157 "num_base_bdevs_discovered": 1, 00:07:47.157 "num_base_bdevs_operational": 3, 00:07:47.157 "base_bdevs_list": [ 00:07:47.157 { 00:07:47.157 "name": "pt1", 00:07:47.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.157 "is_configured": true, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 }, 00:07:47.157 { 00:07:47.157 "name": null, 00:07:47.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.157 "is_configured": false, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 }, 00:07:47.157 { 00:07:47.157 "name": null, 00:07:47.157 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.157 "is_configured": false, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 } 00:07:47.157 ] 00:07:47.157 }' 00:07:47.157 11:50:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.157 11:50:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.417 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.418 [2024-12-06 11:50:45.039495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.418 [2024-12-06 11:50:45.039621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.418 [2024-12-06 11:50:45.039659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:07:47.418 [2024-12-06 11:50:45.039693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.418 [2024-12-06 11:50:45.040086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.418 [2024-12-06 11:50:45.040143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.418 [2024-12-06 11:50:45.040244] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.418 [2024-12-06 11:50:45.040300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.418 pt2 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.418 [2024-12-06 11:50:45.051465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.418 11:50:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.418 "name": "raid_bdev1", 00:07:47.418 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:47.418 "strip_size_kb": 64, 00:07:47.418 "state": "configuring", 00:07:47.418 "raid_level": "raid0", 00:07:47.418 "superblock": true, 00:07:47.418 "num_base_bdevs": 3, 00:07:47.418 "num_base_bdevs_discovered": 1, 00:07:47.418 "num_base_bdevs_operational": 3, 00:07:47.418 "base_bdevs_list": [ 00:07:47.418 { 00:07:47.418 "name": "pt1", 00:07:47.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.418 "is_configured": true, 00:07:47.418 "data_offset": 2048, 00:07:47.418 "data_size": 63488 00:07:47.418 }, 00:07:47.418 { 00:07:47.418 "name": null, 00:07:47.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.418 "is_configured": false, 00:07:47.418 "data_offset": 0, 00:07:47.418 "data_size": 63488 00:07:47.418 }, 00:07:47.418 { 00:07:47.418 "name": null, 00:07:47.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.418 
"is_configured": false, 00:07:47.418 "data_offset": 2048, 00:07:47.418 "data_size": 63488 00:07:47.418 } 00:07:47.418 ] 00:07:47.418 }' 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.418 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.983 [2024-12-06 11:50:45.463002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.983 [2024-12-06 11:50:45.463089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.983 [2024-12-06 11:50:45.463124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:47.983 [2024-12-06 11:50:45.463149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.983 [2024-12-06 11:50:45.463549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.983 [2024-12-06 11:50:45.463608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.983 [2024-12-06 11:50:45.463700] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.983 [2024-12-06 11:50:45.463744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.983 pt2 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.983 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.984 [2024-12-06 11:50:45.470993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:47.984 [2024-12-06 11:50:45.471070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.984 [2024-12-06 11:50:45.471103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:47.984 [2024-12-06 11:50:45.471129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.984 [2024-12-06 11:50:45.471459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.984 [2024-12-06 11:50:45.471514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:47.984 [2024-12-06 11:50:45.471589] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:47.984 [2024-12-06 11:50:45.471643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:47.984 [2024-12-06 11:50:45.471762] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:47.984 [2024-12-06 11:50:45.471799] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:47.984 [2024-12-06 11:50:45.472038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:47.984 [2024-12-06 11:50:45.472175] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:47.984 [2024-12-06 11:50:45.472214] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:47.984 [2024-12-06 11:50:45.472362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.984 pt3 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.984 "name": "raid_bdev1", 00:07:47.984 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:47.984 "strip_size_kb": 64, 00:07:47.984 "state": "online", 00:07:47.984 "raid_level": "raid0", 00:07:47.984 "superblock": true, 00:07:47.984 "num_base_bdevs": 3, 00:07:47.984 "num_base_bdevs_discovered": 3, 00:07:47.984 "num_base_bdevs_operational": 3, 00:07:47.984 "base_bdevs_list": [ 00:07:47.984 { 00:07:47.984 "name": "pt1", 00:07:47.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.984 "is_configured": true, 00:07:47.984 "data_offset": 2048, 00:07:47.984 "data_size": 63488 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "name": "pt2", 00:07:47.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.984 "is_configured": true, 00:07:47.984 "data_offset": 2048, 00:07:47.984 "data_size": 63488 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "name": "pt3", 00:07:47.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.984 "is_configured": true, 00:07:47.984 "data_offset": 2048, 00:07:47.984 "data_size": 63488 00:07:47.984 } 00:07:47.984 ] 00:07:47.984 }' 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.984 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.243 11:50:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.243 [2024-12-06 11:50:45.878582] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.243 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.243 "name": "raid_bdev1", 00:07:48.243 "aliases": [ 00:07:48.243 "7473a454-4691-47de-848a-671bb71a15d3" 00:07:48.243 ], 00:07:48.243 "product_name": "Raid Volume", 00:07:48.243 "block_size": 512, 00:07:48.243 "num_blocks": 190464, 00:07:48.243 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:48.243 "assigned_rate_limits": { 00:07:48.243 "rw_ios_per_sec": 0, 00:07:48.243 "rw_mbytes_per_sec": 0, 00:07:48.244 "r_mbytes_per_sec": 0, 00:07:48.244 "w_mbytes_per_sec": 0 00:07:48.244 }, 00:07:48.244 "claimed": false, 00:07:48.244 "zoned": false, 00:07:48.244 "supported_io_types": { 00:07:48.244 "read": true, 00:07:48.244 "write": true, 00:07:48.244 "unmap": true, 00:07:48.244 "flush": true, 00:07:48.244 "reset": true, 00:07:48.244 "nvme_admin": false, 00:07:48.244 "nvme_io": false, 00:07:48.244 "nvme_io_md": false, 00:07:48.244 
"write_zeroes": true, 00:07:48.244 "zcopy": false, 00:07:48.244 "get_zone_info": false, 00:07:48.244 "zone_management": false, 00:07:48.244 "zone_append": false, 00:07:48.244 "compare": false, 00:07:48.244 "compare_and_write": false, 00:07:48.244 "abort": false, 00:07:48.244 "seek_hole": false, 00:07:48.244 "seek_data": false, 00:07:48.244 "copy": false, 00:07:48.244 "nvme_iov_md": false 00:07:48.244 }, 00:07:48.244 "memory_domains": [ 00:07:48.244 { 00:07:48.244 "dma_device_id": "system", 00:07:48.244 "dma_device_type": 1 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.244 "dma_device_type": 2 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "dma_device_id": "system", 00:07:48.244 "dma_device_type": 1 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.244 "dma_device_type": 2 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "dma_device_id": "system", 00:07:48.244 "dma_device_type": 1 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.244 "dma_device_type": 2 00:07:48.244 } 00:07:48.244 ], 00:07:48.244 "driver_specific": { 00:07:48.244 "raid": { 00:07:48.244 "uuid": "7473a454-4691-47de-848a-671bb71a15d3", 00:07:48.244 "strip_size_kb": 64, 00:07:48.244 "state": "online", 00:07:48.244 "raid_level": "raid0", 00:07:48.244 "superblock": true, 00:07:48.244 "num_base_bdevs": 3, 00:07:48.244 "num_base_bdevs_discovered": 3, 00:07:48.244 "num_base_bdevs_operational": 3, 00:07:48.244 "base_bdevs_list": [ 00:07:48.244 { 00:07:48.244 "name": "pt1", 00:07:48.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.244 "is_configured": true, 00:07:48.244 "data_offset": 2048, 00:07:48.244 "data_size": 63488 00:07:48.244 }, 00:07:48.244 { 00:07:48.244 "name": "pt2", 00:07:48.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.244 "is_configured": true, 00:07:48.244 "data_offset": 2048, 00:07:48.244 "data_size": 63488 00:07:48.244 }, 00:07:48.244 
{ 00:07:48.244 "name": "pt3", 00:07:48.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:48.244 "is_configured": true, 00:07:48.244 "data_offset": 2048, 00:07:48.244 "data_size": 63488 00:07:48.244 } 00:07:48.244 ] 00:07:48.244 } 00:07:48.244 } 00:07:48.244 }' 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.244 pt2 00:07:48.244 pt3' 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.244 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.504 11:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.504 11:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.504 [2024-12-06 
11:50:46.150053] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7473a454-4691-47de-848a-671bb71a15d3 '!=' 7473a454-4691-47de-848a-671bb71a15d3 ']' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75928 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75928 ']' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75928 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75928 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75928' 00:07:48.504 killing process with pid 75928 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75928 00:07:48.504 [2024-12-06 11:50:46.233522] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.504 [2024-12-06 11:50:46.233659] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.504 [2024-12-06 11:50:46.233760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.504 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75928 00:07:48.504 [2024-12-06 11:50:46.233805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:48.764 [2024-12-06 11:50:46.268336] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.764 11:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:48.764 00:07:48.764 real 0m3.847s 00:07:48.764 user 0m6.034s 00:07:48.764 sys 0m0.780s 00:07:48.764 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.764 11:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.765 ************************************ 00:07:48.765 END TEST raid_superblock_test 00:07:48.765 ************************************ 00:07:49.024 11:50:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:49.024 11:50:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:49.024 11:50:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.024 11:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.024 ************************************ 00:07:49.024 START TEST raid_read_error_test 00:07:49.024 ************************************ 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:49.025 11:50:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j37hUba18y 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76170 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76170 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76170 ']' 00:07:49.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.025 11:50:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.025 [2024-12-06 11:50:46.682978] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:49.025 [2024-12-06 11:50:46.683098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76170 ] 00:07:49.285 [2024-12-06 11:50:46.828775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.285 [2024-12-06 11:50:46.872778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.285 [2024-12-06 11:50:46.915077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.285 [2024-12-06 11:50:46.915108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 BaseBdev1_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 true 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 [2024-12-06 11:50:47.513111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.856 [2024-12-06 11:50:47.513239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.856 [2024-12-06 11:50:47.513280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:49.856 [2024-12-06 11:50:47.513318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.856 [2024-12-06 11:50:47.515365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.856 [2024-12-06 11:50:47.515452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.856 BaseBdev1 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 BaseBdev2_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 true 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 [2024-12-06 11:50:47.568567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:49.856 [2024-12-06 11:50:47.568707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.856 [2024-12-06 11:50:47.568770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:49.856 [2024-12-06 11:50:47.568823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.856 [2024-12-06 11:50:47.571915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.856 [2024-12-06 11:50:47.572017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:49.856 BaseBdev2 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.856 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.856 BaseBdev3_malloc 00:07:49.856 11:50:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.857 true 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.857 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.857 [2024-12-06 11:50:47.609229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:49.857 [2024-12-06 11:50:47.609339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.857 [2024-12-06 11:50:47.609362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:49.857 [2024-12-06 11:50:47.609371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.857 [2024-12-06 11:50:47.611356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.857 [2024-12-06 11:50:47.611390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:50.117 BaseBdev3 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.117 [2024-12-06 11:50:47.621293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.117 [2024-12-06 11:50:47.623093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.117 [2024-12-06 11:50:47.623225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.117 [2024-12-06 11:50:47.623432] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:50.117 [2024-12-06 11:50:47.623483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:50.117 [2024-12-06 11:50:47.623743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:50.117 [2024-12-06 11:50:47.623918] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:50.117 [2024-12-06 11:50:47.623959] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:50.117 [2024-12-06 11:50:47.624125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.117 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.118 11:50:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.118 "name": "raid_bdev1", 00:07:50.118 "uuid": "f1caedfc-f227-487f-bca9-add76e6001a5", 00:07:50.118 "strip_size_kb": 64, 00:07:50.118 "state": "online", 00:07:50.118 "raid_level": "raid0", 00:07:50.118 "superblock": true, 00:07:50.118 "num_base_bdevs": 3, 00:07:50.118 "num_base_bdevs_discovered": 3, 00:07:50.118 "num_base_bdevs_operational": 3, 00:07:50.118 "base_bdevs_list": [ 00:07:50.118 { 00:07:50.118 "name": "BaseBdev1", 00:07:50.118 "uuid": "d6b1bd92-afb0-5ed0-b114-b9688190e81d", 00:07:50.118 "is_configured": true, 00:07:50.118 "data_offset": 2048, 00:07:50.118 "data_size": 63488 00:07:50.118 }, 00:07:50.118 { 00:07:50.118 "name": "BaseBdev2", 00:07:50.118 "uuid": "28b28549-cbe4-502b-a9d4-33d97723cea3", 00:07:50.118 "is_configured": true, 00:07:50.118 "data_offset": 2048, 00:07:50.118 "data_size": 63488 
00:07:50.118 }, 00:07:50.118 { 00:07:50.118 "name": "BaseBdev3", 00:07:50.118 "uuid": "0f48c2bb-9a7a-5993-8c7b-192211995f26", 00:07:50.118 "is_configured": true, 00:07:50.118 "data_offset": 2048, 00:07:50.118 "data_size": 63488 00:07:50.118 } 00:07:50.118 ] 00:07:50.118 }' 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.118 11:50:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.378 11:50:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:50.378 11:50:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:50.638 [2024-12-06 11:50:48.168738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
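The `verify_raid_bdev_state` trace above feeds the output of `rpc_cmd bdev_raid_get_bdevs all` through `jq` and then asserts on individual fields of `raid_bdev_info`. A minimal standalone sketch of those field checks, using only grep/awk so it runs without a live SPDK target — the JSON literal below is condensed from the `raid_bdev_info` captured above, and the helper names `field`/`num_field` are hypothetical, not part of `bdev_raid.sh`:

```shell
#!/usr/bin/env bash
# Condensed copy of the raid_bdev_info JSON captured in the trace above.
raid_bdev_info='{"name": "raid_bdev1", "strip_size_kb": 64, "state": "online", "raid_level": "raid0", "num_base_bdevs": 3, "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}'

# Hypothetical helpers (not from bdev_raid.sh): pull one field out of the
# JSON the way the test's jq filter does, without requiring jq.
field()     { grep -o "\"$1\": \"[^\"]*\"" <<< "$raid_bdev_info" | cut -d'"' -f4; }
num_field() { grep -o "\"$1\": [0-9]*"     <<< "$raid_bdev_info" | awk '{print $NF}'; }

# The same assertions verify_raid_bdev_state makes: state, level, strip size,
# and operational base bdev count must match what the test created.
[[ "$(field state)" == online ]]
[[ "$(field raid_level)" == raid0 ]]
[[ "$(num_field strip_size_kb)" == 64 ]]
[[ "$(num_field num_base_bdevs_operational)" == 3 ]]
echo "raid_bdev1 state verified"
```

The real test re-runs this check both right after `bdev_raid_create` and again after error injection, since a raid0 array stays `online` with all 3 base bdevs even when a read error is injected.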
00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.578 "name": "raid_bdev1", 00:07:51.578 "uuid": "f1caedfc-f227-487f-bca9-add76e6001a5", 00:07:51.578 "strip_size_kb": 64, 00:07:51.578 "state": "online", 00:07:51.578 "raid_level": "raid0", 00:07:51.578 "superblock": true, 00:07:51.578 "num_base_bdevs": 3, 00:07:51.578 "num_base_bdevs_discovered": 3, 00:07:51.578 "num_base_bdevs_operational": 3, 00:07:51.578 "base_bdevs_list": [ 00:07:51.578 { 00:07:51.578 "name": "BaseBdev1", 00:07:51.578 "uuid": "d6b1bd92-afb0-5ed0-b114-b9688190e81d", 00:07:51.578 "is_configured": true, 00:07:51.578 "data_offset": 2048, 00:07:51.578 "data_size": 63488 
00:07:51.578 }, 00:07:51.578 { 00:07:51.578 "name": "BaseBdev2", 00:07:51.578 "uuid": "28b28549-cbe4-502b-a9d4-33d97723cea3", 00:07:51.578 "is_configured": true, 00:07:51.578 "data_offset": 2048, 00:07:51.578 "data_size": 63488 00:07:51.578 }, 00:07:51.578 { 00:07:51.578 "name": "BaseBdev3", 00:07:51.578 "uuid": "0f48c2bb-9a7a-5993-8c7b-192211995f26", 00:07:51.578 "is_configured": true, 00:07:51.578 "data_offset": 2048, 00:07:51.578 "data_size": 63488 00:07:51.578 } 00:07:51.578 ] 00:07:51.578 }' 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.578 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.838 [2024-12-06 11:50:49.548578] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.838 [2024-12-06 11:50:49.548665] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.838 [2024-12-06 11:50:49.551102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.838 [2024-12-06 11:50:49.551219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.838 [2024-12-06 11:50:49.551293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.838 [2024-12-06 11:50:49.551349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:51.838 { 00:07:51.838 "results": [ 00:07:51.838 { 00:07:51.838 "job": "raid_bdev1", 00:07:51.838 "core_mask": "0x1", 00:07:51.838 "workload": "randrw", 00:07:51.838 "percentage": 50, 
00:07:51.838 "status": "finished", 00:07:51.838 "queue_depth": 1, 00:07:51.838 "io_size": 131072, 00:07:51.838 "runtime": 1.380833, 00:07:51.838 "iops": 17606.763453654425, 00:07:51.838 "mibps": 2200.845431706803, 00:07:51.838 "io_failed": 1, 00:07:51.838 "io_timeout": 0, 00:07:51.838 "avg_latency_us": 78.65595823895674, 00:07:51.838 "min_latency_us": 18.55720524017467, 00:07:51.838 "max_latency_us": 1380.8349344978167 00:07:51.838 } 00:07:51.838 ], 00:07:51.838 "core_count": 1 00:07:51.838 } 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76170 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76170 ']' 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76170 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76170 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76170' 00:07:51.838 killing process with pid 76170 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76170 00:07:51.838 [2024-12-06 11:50:49.589835] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.838 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76170 00:07:52.097 [2024-12-06 
11:50:49.615672] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.097 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j37hUba18y 00:07:52.097 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.097 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:52.356 00:07:52.356 real 0m3.282s 00:07:52.356 user 0m4.161s 00:07:52.356 sys 0m0.490s 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.356 11:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.356 ************************************ 00:07:52.356 END TEST raid_read_error_test 00:07:52.356 ************************************ 00:07:52.356 11:50:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:52.356 11:50:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:52.356 11:50:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.356 11:50:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.356 ************************************ 00:07:52.356 START TEST raid_write_error_test 00:07:52.356 ************************************ 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:07:52.356 11:50:49 
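The read-error test above derives `fail_per_s` by stripping the `Job` header from the bdevperf log, grepping for the `raid_bdev1` result row, and taking awk field 6. A self-contained sketch of that pipeline — the sample rows below are fabricated for illustration (the real row layout comes from bdevperf's summary output, which is not reproduced verbatim in this log); only the use of column 6 mirrors the trace:

```shell
#!/usr/bin/env bash
# Fabricated stand-in for bdevperf summary output; the 0.72 in column 6 of the
# result row matches the fail_per_s value captured in the trace above.
bdevperf_log='Job: raid_bdev1 ended in about 1.38 seconds
raid_bdev1 17606.76 2200.85 0.00 0.00 0.72'

# Same pipeline as bdev_raid.sh@845: drop the "Job" header line, keep the
# raid_bdev1 row, print column 6.
fail_per_s=$(grep -v Job <<< "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')

# bdev_raid.sh@846-849: raid0 has no redundancy, so the injected read error
# must surface as a non-zero failure rate.
[[ "$fail_per_s" != "0.00" ]] && echo "fail_per_s=$fail_per_s (injected error observed)"
```

This is why the trace runs `has_redundancy raid0` first: for redundant levels the test instead expects `fail_per_s` to stay `0.00`.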
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.356 11:50:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Jm9h3lDZIr 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76299 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76299 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76299 ']' 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
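The `(( i <= num_base_bdevs ))` loop traced in the setup above is how `raid_io_error_test` composes its base bdev name list before calling `bdev_raid_create`, and the `'[' raid0 '!=' raid1 ']'` check is what appends the strip-size argument. A minimal standalone reconstruction of that setup, with variable names taken directly from the trace (the rpc invocation is echoed rather than executed, since no SPDK target is running here):

```shell
#!/usr/bin/env bash
# Reconstruction of the traced setup: build BaseBdev1..BaseBdevN and, for any
# level other than raid1, append the -z <strip_size> argument.
raid_level=raid0
num_base_bdevs=3
create_arg=""

base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done

if [[ "$raid_level" != raid1 ]]; then
    strip_size=64
    create_arg+=" -z $strip_size"
fi

# Echo the resulting rpc call in the same shape as the traced
# bdev_raid_create invocation.
echo rpc_cmd bdev_raid_create$create_arg -r "$raid_level" -b "'${base_bdevs[*]}'" -n raid_bdev1 -s
```

The quoted `'BaseBdev1 BaseBdev2 BaseBdev3'` matches the `-b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\'''` seen in the xtrace, which is just the shell's re-quoting of a single-quoted space-separated list.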
00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.356 11:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.356 [2024-12-06 11:50:50.020671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:52.356 [2024-12-06 11:50:50.020784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76299 ] 00:07:52.615 [2024-12-06 11:50:50.163978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.615 [2024-12-06 11:50:50.207550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.615 [2024-12-06 11:50:50.249745] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.615 [2024-12-06 11:50:50.249852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.184 BaseBdev1_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.184 true 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.184 [2024-12-06 11:50:50.871932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.184 [2024-12-06 11:50:50.872057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.184 [2024-12-06 11:50:50.872101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:53.184 [2024-12-06 11:50:50.872134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.184 [2024-12-06 11:50:50.874199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.184 [2024-12-06 11:50:50.874278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.184 BaseBdev1 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.184 BaseBdev2_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.184 true 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.184 [2024-12-06 11:50:50.929140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.184 [2024-12-06 11:50:50.929305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.184 [2024-12-06 11:50:50.929374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:53.184 [2024-12-06 11:50:50.929430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.184 [2024-12-06 11:50:50.932718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.184 BaseBdev2 00:07:53.184 [2024-12-06 11:50:50.932824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.184 11:50:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.184 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 BaseBdev3_malloc 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 true 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 [2024-12-06 11:50:50.970370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:53.443 [2024-12-06 11:50:50.970451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.443 [2024-12-06 11:50:50.970485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:53.443 [2024-12-06 11:50:50.970511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.443 [2024-12-06 11:50:50.972531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.443 [2024-12-06 11:50:50.972598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:53.443 BaseBdev3 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.443 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 [2024-12-06 11:50:50.982424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.443 [2024-12-06 11:50:50.984272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.443 [2024-12-06 11:50:50.984393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.443 [2024-12-06 11:50:50.984576] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:53.443 [2024-12-06 11:50:50.984620] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:53.443 [2024-12-06 11:50:50.984890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:53.443 [2024-12-06 11:50:50.985067] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:53.443 [2024-12-06 11:50:50.985108] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:53.444 [2024-12-06 11:50:50.985264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.444 11:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.444 11:50:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.444 11:50:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.444 "name": "raid_bdev1", 00:07:53.444 "uuid": "8dc14f6a-830d-4ce9-87d9-4d981abd2387", 00:07:53.444 "strip_size_kb": 64, 00:07:53.444 "state": "online", 00:07:53.444 "raid_level": "raid0", 00:07:53.444 "superblock": true, 00:07:53.444 "num_base_bdevs": 3, 00:07:53.444 "num_base_bdevs_discovered": 3, 00:07:53.444 "num_base_bdevs_operational": 3, 00:07:53.444 "base_bdevs_list": [ 00:07:53.444 { 00:07:53.444 "name": "BaseBdev1", 
00:07:53.444 "uuid": "b73e77b9-a428-5000-9f44-3322b39b8ec1", 00:07:53.444 "is_configured": true, 00:07:53.444 "data_offset": 2048, 00:07:53.444 "data_size": 63488 00:07:53.444 }, 00:07:53.444 { 00:07:53.444 "name": "BaseBdev2", 00:07:53.444 "uuid": "3b4476f9-96b3-5ed7-9568-025603a3dcc7", 00:07:53.444 "is_configured": true, 00:07:53.444 "data_offset": 2048, 00:07:53.444 "data_size": 63488 00:07:53.444 }, 00:07:53.444 { 00:07:53.444 "name": "BaseBdev3", 00:07:53.444 "uuid": "e06d3d95-2574-5790-9714-53e29a6a4212", 00:07:53.444 "is_configured": true, 00:07:53.444 "data_offset": 2048, 00:07:53.444 "data_size": 63488 00:07:53.444 } 00:07:53.444 ] 00:07:53.444 }' 00:07:53.444 11:50:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.444 11:50:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.709 11:50:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.709 11:50:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.982 [2024-12-06 11:50:51.521876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.921 "name": "raid_bdev1", 00:07:54.921 "uuid": "8dc14f6a-830d-4ce9-87d9-4d981abd2387", 00:07:54.921 "strip_size_kb": 64, 00:07:54.921 "state": "online", 00:07:54.921 
"raid_level": "raid0", 00:07:54.921 "superblock": true, 00:07:54.921 "num_base_bdevs": 3, 00:07:54.921 "num_base_bdevs_discovered": 3, 00:07:54.921 "num_base_bdevs_operational": 3, 00:07:54.921 "base_bdevs_list": [ 00:07:54.921 { 00:07:54.921 "name": "BaseBdev1", 00:07:54.921 "uuid": "b73e77b9-a428-5000-9f44-3322b39b8ec1", 00:07:54.921 "is_configured": true, 00:07:54.921 "data_offset": 2048, 00:07:54.921 "data_size": 63488 00:07:54.921 }, 00:07:54.921 { 00:07:54.921 "name": "BaseBdev2", 00:07:54.921 "uuid": "3b4476f9-96b3-5ed7-9568-025603a3dcc7", 00:07:54.921 "is_configured": true, 00:07:54.921 "data_offset": 2048, 00:07:54.921 "data_size": 63488 00:07:54.921 }, 00:07:54.921 { 00:07:54.921 "name": "BaseBdev3", 00:07:54.921 "uuid": "e06d3d95-2574-5790-9714-53e29a6a4212", 00:07:54.921 "is_configured": true, 00:07:54.921 "data_offset": 2048, 00:07:54.921 "data_size": 63488 00:07:54.921 } 00:07:54.921 ] 00:07:54.921 }' 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.921 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.181 [2024-12-06 11:50:52.869206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.181 [2024-12-06 11:50:52.869299] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.181 [2024-12-06 11:50:52.871750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.181 [2024-12-06 11:50:52.871796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.181 [2024-12-06 11:50:52.871829] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.181 [2024-12-06 11:50:52.871839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:55.181 { 00:07:55.181 "results": [ 00:07:55.181 { 00:07:55.181 "job": "raid_bdev1", 00:07:55.181 "core_mask": "0x1", 00:07:55.181 "workload": "randrw", 00:07:55.181 "percentage": 50, 00:07:55.181 "status": "finished", 00:07:55.181 "queue_depth": 1, 00:07:55.181 "io_size": 131072, 00:07:55.181 "runtime": 1.348251, 00:07:55.181 "iops": 17400.320860136577, 00:07:55.181 "mibps": 2175.040107517072, 00:07:55.181 "io_failed": 1, 00:07:55.181 "io_timeout": 0, 00:07:55.181 "avg_latency_us": 79.537568563568, 00:07:55.181 "min_latency_us": 24.929257641921396, 00:07:55.181 "max_latency_us": 1316.4436681222708 00:07:55.181 } 00:07:55.181 ], 00:07:55.181 "core_count": 1 00:07:55.181 } 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76299 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76299 ']' 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76299 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76299 00:07:55.181 killing process with pid 76299 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.181 11:50:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76299' 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76299 00:07:55.181 [2024-12-06 11:50:52.908718] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.181 11:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76299 00:07:55.181 [2024-12-06 11:50:52.933747] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Jm9h3lDZIr 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.441 ************************************ 00:07:55.441 END TEST raid_write_error_test 00:07:55.441 ************************************ 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:55.441 00:07:55.441 real 0m3.250s 00:07:55.441 user 0m4.095s 00:07:55.441 sys 0m0.515s 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.441 11:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 11:50:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:55.700 11:50:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:07:55.700 11:50:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:55.700 11:50:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.700 11:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 ************************************ 00:07:55.700 START TEST raid_state_function_test 00:07:55.700 ************************************ 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.700 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:55.700 11:50:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.701 Process raid pid: 76426 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76426 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76426' 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76426 00:07:55.701 11:50:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76426 ']' 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.701 11:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.701 [2024-12-06 11:50:53.333055] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:55.701 [2024-12-06 11:50:53.333274] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.961 [2024-12-06 11:50:53.477786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.961 [2024-12-06 11:50:53.521743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.961 [2024-12-06 11:50:53.563137] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.961 [2024-12-06 11:50:53.563283] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.533 [2024-12-06 11:50:54.155960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.533 [2024-12-06 11:50:54.156065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.533 [2024-12-06 11:50:54.156097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.533 [2024-12-06 11:50:54.156120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.533 [2024-12-06 11:50:54.156137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.533 [2024-12-06 11:50:54.156162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.533 "name": "Existed_Raid", 00:07:56.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.533 "strip_size_kb": 64, 00:07:56.533 "state": "configuring", 00:07:56.533 "raid_level": "concat", 00:07:56.533 "superblock": false, 00:07:56.533 "num_base_bdevs": 3, 00:07:56.533 "num_base_bdevs_discovered": 0, 00:07:56.533 "num_base_bdevs_operational": 3, 00:07:56.533 "base_bdevs_list": [ 00:07:56.533 { 00:07:56.533 "name": "BaseBdev1", 00:07:56.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.533 "is_configured": false, 00:07:56.533 "data_offset": 0, 00:07:56.533 "data_size": 0 00:07:56.533 }, 00:07:56.533 { 00:07:56.533 "name": "BaseBdev2", 00:07:56.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.533 "is_configured": false, 00:07:56.533 "data_offset": 0, 00:07:56.533 "data_size": 0 00:07:56.533 }, 00:07:56.533 { 00:07:56.533 "name": "BaseBdev3", 00:07:56.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:56.533 "is_configured": false, 00:07:56.533 "data_offset": 0, 00:07:56.533 "data_size": 0 00:07:56.533 } 00:07:56.533 ] 00:07:56.533 }' 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.533 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 [2024-12-06 11:50:54.567192] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.104 [2024-12-06 11:50:54.567297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 [2024-12-06 11:50:54.579203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.104 [2024-12-06 11:50:54.579309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.104 [2024-12-06 11:50:54.579339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.104 [2024-12-06 11:50:54.579361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:57.104 [2024-12-06 11:50:54.579393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.104 [2024-12-06 11:50:54.579421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 BaseBdev1 00:07:57.104 [2024-12-06 11:50:54.599823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 [ 00:07:57.104 { 00:07:57.104 "name": "BaseBdev1", 00:07:57.104 "aliases": [ 00:07:57.104 "3a55da04-fd73-4aaa-8116-2b52871478a6" 00:07:57.104 ], 00:07:57.104 "product_name": "Malloc disk", 00:07:57.104 "block_size": 512, 00:07:57.104 "num_blocks": 65536, 00:07:57.104 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:57.104 "assigned_rate_limits": { 00:07:57.104 "rw_ios_per_sec": 0, 00:07:57.104 "rw_mbytes_per_sec": 0, 00:07:57.104 "r_mbytes_per_sec": 0, 00:07:57.104 "w_mbytes_per_sec": 0 00:07:57.104 }, 00:07:57.104 "claimed": true, 00:07:57.104 "claim_type": "exclusive_write", 00:07:57.104 "zoned": false, 00:07:57.104 "supported_io_types": { 00:07:57.104 "read": true, 00:07:57.104 "write": true, 00:07:57.104 "unmap": true, 00:07:57.104 "flush": true, 00:07:57.104 "reset": true, 00:07:57.104 "nvme_admin": false, 00:07:57.104 "nvme_io": false, 00:07:57.104 "nvme_io_md": false, 00:07:57.104 "write_zeroes": true, 00:07:57.104 "zcopy": true, 00:07:57.104 "get_zone_info": false, 00:07:57.104 "zone_management": false, 00:07:57.104 "zone_append": false, 00:07:57.104 "compare": false, 00:07:57.104 "compare_and_write": false, 00:07:57.104 "abort": true, 00:07:57.104 "seek_hole": false, 00:07:57.104 "seek_data": false, 00:07:57.104 "copy": true, 00:07:57.104 "nvme_iov_md": false 00:07:57.104 }, 00:07:57.104 "memory_domains": [ 00:07:57.104 { 00:07:57.104 "dma_device_id": "system", 00:07:57.104 "dma_device_type": 1 00:07:57.104 }, 00:07:57.104 { 00:07:57.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:57.104 "dma_device_type": 2 00:07:57.104 } 00:07:57.104 ], 00:07:57.104 "driver_specific": {} 00:07:57.104 } 00:07:57.104 ] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.104 11:50:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.104 "name": "Existed_Raid", 00:07:57.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.104 "strip_size_kb": 64, 00:07:57.104 "state": "configuring", 00:07:57.104 "raid_level": "concat", 00:07:57.104 "superblock": false, 00:07:57.104 "num_base_bdevs": 3, 00:07:57.105 "num_base_bdevs_discovered": 1, 00:07:57.105 "num_base_bdevs_operational": 3, 00:07:57.105 "base_bdevs_list": [ 00:07:57.105 { 00:07:57.105 "name": "BaseBdev1", 00:07:57.105 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:57.105 "is_configured": true, 00:07:57.105 "data_offset": 0, 00:07:57.105 "data_size": 65536 00:07:57.105 }, 00:07:57.105 { 00:07:57.105 "name": "BaseBdev2", 00:07:57.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.105 "is_configured": false, 00:07:57.105 "data_offset": 0, 00:07:57.105 "data_size": 0 00:07:57.105 }, 00:07:57.105 { 00:07:57.105 "name": "BaseBdev3", 00:07:57.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.105 "is_configured": false, 00:07:57.105 "data_offset": 0, 00:07:57.105 "data_size": 0 00:07:57.105 } 00:07:57.105 ] 00:07:57.105 }' 00:07:57.105 11:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.105 11:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.364 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.364 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.364 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.364 [2024-12-06 11:50:55.043086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.365 [2024-12-06 11:50:55.043162] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.365 [2024-12-06 11:50:55.055124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.365 [2024-12-06 11:50:55.056878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.365 [2024-12-06 11:50:55.056921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.365 [2024-12-06 11:50:55.056931] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.365 [2024-12-06 11:50:55.056941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.365 11:50:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.365 "name": "Existed_Raid", 00:07:57.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.365 "strip_size_kb": 64, 00:07:57.365 "state": "configuring", 00:07:57.365 "raid_level": "concat", 00:07:57.365 "superblock": false, 00:07:57.365 "num_base_bdevs": 3, 00:07:57.365 "num_base_bdevs_discovered": 1, 00:07:57.365 "num_base_bdevs_operational": 3, 00:07:57.365 "base_bdevs_list": [ 00:07:57.365 { 00:07:57.365 "name": "BaseBdev1", 00:07:57.365 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:57.365 "is_configured": true, 00:07:57.365 "data_offset": 
0, 00:07:57.365 "data_size": 65536 00:07:57.365 }, 00:07:57.365 { 00:07:57.365 "name": "BaseBdev2", 00:07:57.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.365 "is_configured": false, 00:07:57.365 "data_offset": 0, 00:07:57.365 "data_size": 0 00:07:57.365 }, 00:07:57.365 { 00:07:57.365 "name": "BaseBdev3", 00:07:57.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.365 "is_configured": false, 00:07:57.365 "data_offset": 0, 00:07:57.365 "data_size": 0 00:07:57.365 } 00:07:57.365 ] 00:07:57.365 }' 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.365 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 [2024-12-06 11:50:55.462636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.947 BaseBdev2 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 [ 00:07:57.947 { 00:07:57.947 "name": "BaseBdev2", 00:07:57.947 "aliases": [ 00:07:57.947 "ee465616-4c84-478a-a084-059c7c2ee8a1" 00:07:57.947 ], 00:07:57.947 "product_name": "Malloc disk", 00:07:57.947 "block_size": 512, 00:07:57.947 "num_blocks": 65536, 00:07:57.947 "uuid": "ee465616-4c84-478a-a084-059c7c2ee8a1", 00:07:57.947 "assigned_rate_limits": { 00:07:57.947 "rw_ios_per_sec": 0, 00:07:57.947 "rw_mbytes_per_sec": 0, 00:07:57.947 "r_mbytes_per_sec": 0, 00:07:57.947 "w_mbytes_per_sec": 0 00:07:57.947 }, 00:07:57.947 "claimed": true, 00:07:57.947 "claim_type": "exclusive_write", 00:07:57.947 "zoned": false, 00:07:57.947 "supported_io_types": { 00:07:57.947 "read": true, 00:07:57.947 "write": true, 00:07:57.947 "unmap": true, 00:07:57.947 "flush": true, 00:07:57.947 "reset": true, 00:07:57.947 "nvme_admin": false, 00:07:57.947 "nvme_io": false, 00:07:57.947 "nvme_io_md": false, 00:07:57.947 "write_zeroes": true, 00:07:57.947 "zcopy": true, 00:07:57.947 "get_zone_info": false, 00:07:57.947 "zone_management": false, 00:07:57.947 "zone_append": false, 00:07:57.947 "compare": false, 00:07:57.947 "compare_and_write": false, 00:07:57.947 "abort": true, 00:07:57.947 "seek_hole": 
false, 00:07:57.947 "seek_data": false, 00:07:57.947 "copy": true, 00:07:57.947 "nvme_iov_md": false 00:07:57.947 }, 00:07:57.947 "memory_domains": [ 00:07:57.947 { 00:07:57.947 "dma_device_id": "system", 00:07:57.947 "dma_device_type": 1 00:07:57.947 }, 00:07:57.947 { 00:07:57.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.947 "dma_device_type": 2 00:07:57.947 } 00:07:57.947 ], 00:07:57.947 "driver_specific": {} 00:07:57.947 } 00:07:57.947 ] 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.947 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.948 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.948 "name": "Existed_Raid", 00:07:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.948 "strip_size_kb": 64, 00:07:57.948 "state": "configuring", 00:07:57.948 "raid_level": "concat", 00:07:57.948 "superblock": false, 00:07:57.948 "num_base_bdevs": 3, 00:07:57.948 "num_base_bdevs_discovered": 2, 00:07:57.948 "num_base_bdevs_operational": 3, 00:07:57.948 "base_bdevs_list": [ 00:07:57.948 { 00:07:57.948 "name": "BaseBdev1", 00:07:57.948 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:57.948 "is_configured": true, 00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 65536 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "name": "BaseBdev2", 00:07:57.948 "uuid": "ee465616-4c84-478a-a084-059c7c2ee8a1", 00:07:57.948 "is_configured": true, 00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 65536 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "name": "BaseBdev3", 00:07:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.948 "is_configured": false, 00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 0 00:07:57.948 } 00:07:57.948 ] 00:07:57.948 }' 00:07:57.948 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.948 11:50:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 [2024-12-06 11:50:55.996890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.518 [2024-12-06 11:50:55.997000] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:58.518 [2024-12-06 11:50:55.997029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:58.518 [2024-12-06 11:50:55.997353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:58.518 [2024-12-06 11:50:55.997547] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:58.518 [2024-12-06 11:50:55.997589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:58.518 [2024-12-06 11:50:55.997803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.518 BaseBdev3 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:58.518 11:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.518 11:50:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.518 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 [ 00:07:58.518 { 00:07:58.518 "name": "BaseBdev3", 00:07:58.518 "aliases": [ 00:07:58.518 "d532e55c-64c0-4b6b-a7cc-0b143c7d45ec" 00:07:58.518 ], 00:07:58.518 "product_name": "Malloc disk", 00:07:58.518 "block_size": 512, 00:07:58.518 "num_blocks": 65536, 00:07:58.518 "uuid": "d532e55c-64c0-4b6b-a7cc-0b143c7d45ec", 00:07:58.518 "assigned_rate_limits": { 00:07:58.518 "rw_ios_per_sec": 0, 00:07:58.518 "rw_mbytes_per_sec": 0, 00:07:58.518 "r_mbytes_per_sec": 0, 00:07:58.518 "w_mbytes_per_sec": 0 00:07:58.518 }, 00:07:58.518 "claimed": true, 00:07:58.518 "claim_type": "exclusive_write", 00:07:58.518 "zoned": false, 00:07:58.518 "supported_io_types": { 00:07:58.518 "read": true, 00:07:58.518 "write": true, 00:07:58.518 "unmap": true, 00:07:58.518 "flush": true, 00:07:58.518 "reset": true, 00:07:58.518 "nvme_admin": false, 00:07:58.518 "nvme_io": false, 00:07:58.518 "nvme_io_md": false, 00:07:58.518 "write_zeroes": true, 00:07:58.518 "zcopy": true, 00:07:58.518 "get_zone_info": false, 00:07:58.518 "zone_management": false, 00:07:58.518 "zone_append": false, 00:07:58.518 "compare": false, 
00:07:58.518 "compare_and_write": false, 00:07:58.519 "abort": true, 00:07:58.519 "seek_hole": false, 00:07:58.519 "seek_data": false, 00:07:58.519 "copy": true, 00:07:58.519 "nvme_iov_md": false 00:07:58.519 }, 00:07:58.519 "memory_domains": [ 00:07:58.519 { 00:07:58.519 "dma_device_id": "system", 00:07:58.519 "dma_device_type": 1 00:07:58.519 }, 00:07:58.519 { 00:07:58.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.519 "dma_device_type": 2 00:07:58.519 } 00:07:58.519 ], 00:07:58.519 "driver_specific": {} 00:07:58.519 } 00:07:58.519 ] 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.519 "name": "Existed_Raid", 00:07:58.519 "uuid": "d2da4645-627d-4722-ae96-95dde57d55bb", 00:07:58.519 "strip_size_kb": 64, 00:07:58.519 "state": "online", 00:07:58.519 "raid_level": "concat", 00:07:58.519 "superblock": false, 00:07:58.519 "num_base_bdevs": 3, 00:07:58.519 "num_base_bdevs_discovered": 3, 00:07:58.519 "num_base_bdevs_operational": 3, 00:07:58.519 "base_bdevs_list": [ 00:07:58.519 { 00:07:58.519 "name": "BaseBdev1", 00:07:58.519 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:58.519 "is_configured": true, 00:07:58.519 "data_offset": 0, 00:07:58.519 "data_size": 65536 00:07:58.519 }, 00:07:58.519 { 00:07:58.519 "name": "BaseBdev2", 00:07:58.519 "uuid": "ee465616-4c84-478a-a084-059c7c2ee8a1", 00:07:58.519 "is_configured": true, 00:07:58.519 "data_offset": 0, 00:07:58.519 "data_size": 65536 00:07:58.519 }, 00:07:58.519 { 00:07:58.519 "name": "BaseBdev3", 00:07:58.519 "uuid": "d532e55c-64c0-4b6b-a7cc-0b143c7d45ec", 00:07:58.519 "is_configured": true, 00:07:58.519 "data_offset": 0, 00:07:58.519 "data_size": 65536 00:07:58.519 } 00:07:58.519 ] 00:07:58.519 }' 00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:58.519 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.779 [2024-12-06 11:50:56.436509] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.779 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.779 "name": "Existed_Raid", 00:07:58.779 "aliases": [ 00:07:58.779 "d2da4645-627d-4722-ae96-95dde57d55bb" 00:07:58.779 ], 00:07:58.779 "product_name": "Raid Volume", 00:07:58.779 "block_size": 512, 00:07:58.779 "num_blocks": 196608, 00:07:58.779 "uuid": "d2da4645-627d-4722-ae96-95dde57d55bb", 00:07:58.779 "assigned_rate_limits": { 00:07:58.779 "rw_ios_per_sec": 0, 00:07:58.779 "rw_mbytes_per_sec": 0, 00:07:58.779 "r_mbytes_per_sec": 
0, 00:07:58.779 "w_mbytes_per_sec": 0 00:07:58.779 }, 00:07:58.779 "claimed": false, 00:07:58.779 "zoned": false, 00:07:58.779 "supported_io_types": { 00:07:58.779 "read": true, 00:07:58.779 "write": true, 00:07:58.779 "unmap": true, 00:07:58.779 "flush": true, 00:07:58.779 "reset": true, 00:07:58.779 "nvme_admin": false, 00:07:58.779 "nvme_io": false, 00:07:58.779 "nvme_io_md": false, 00:07:58.779 "write_zeroes": true, 00:07:58.779 "zcopy": false, 00:07:58.779 "get_zone_info": false, 00:07:58.779 "zone_management": false, 00:07:58.779 "zone_append": false, 00:07:58.779 "compare": false, 00:07:58.779 "compare_and_write": false, 00:07:58.779 "abort": false, 00:07:58.779 "seek_hole": false, 00:07:58.779 "seek_data": false, 00:07:58.779 "copy": false, 00:07:58.779 "nvme_iov_md": false 00:07:58.779 }, 00:07:58.779 "memory_domains": [ 00:07:58.779 { 00:07:58.779 "dma_device_id": "system", 00:07:58.779 "dma_device_type": 1 00:07:58.779 }, 00:07:58.779 { 00:07:58.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.779 "dma_device_type": 2 00:07:58.779 }, 00:07:58.779 { 00:07:58.779 "dma_device_id": "system", 00:07:58.779 "dma_device_type": 1 00:07:58.779 }, 00:07:58.779 { 00:07:58.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.779 "dma_device_type": 2 00:07:58.779 }, 00:07:58.779 { 00:07:58.779 "dma_device_id": "system", 00:07:58.779 "dma_device_type": 1 00:07:58.779 }, 00:07:58.779 { 00:07:58.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.779 "dma_device_type": 2 00:07:58.779 } 00:07:58.779 ], 00:07:58.779 "driver_specific": { 00:07:58.779 "raid": { 00:07:58.779 "uuid": "d2da4645-627d-4722-ae96-95dde57d55bb", 00:07:58.779 "strip_size_kb": 64, 00:07:58.779 "state": "online", 00:07:58.779 "raid_level": "concat", 00:07:58.779 "superblock": false, 00:07:58.779 "num_base_bdevs": 3, 00:07:58.779 "num_base_bdevs_discovered": 3, 00:07:58.779 "num_base_bdevs_operational": 3, 00:07:58.779 "base_bdevs_list": [ 00:07:58.779 { 00:07:58.779 "name": "BaseBdev1", 
00:07:58.779 "uuid": "3a55da04-fd73-4aaa-8116-2b52871478a6", 00:07:58.779 "is_configured": true, 00:07:58.779 "data_offset": 0, 00:07:58.780 "data_size": 65536 00:07:58.780 }, 00:07:58.780 { 00:07:58.780 "name": "BaseBdev2", 00:07:58.780 "uuid": "ee465616-4c84-478a-a084-059c7c2ee8a1", 00:07:58.780 "is_configured": true, 00:07:58.780 "data_offset": 0, 00:07:58.780 "data_size": 65536 00:07:58.780 }, 00:07:58.780 { 00:07:58.780 "name": "BaseBdev3", 00:07:58.780 "uuid": "d532e55c-64c0-4b6b-a7cc-0b143c7d45ec", 00:07:58.780 "is_configured": true, 00:07:58.780 "data_offset": 0, 00:07:58.780 "data_size": 65536 00:07:58.780 } 00:07:58.780 ] 00:07:58.780 } 00:07:58.780 } 00:07:58.780 }' 00:07:58.780 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.780 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.780 BaseBdev2 00:07:58.780 BaseBdev3' 00:07:58.780 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 [2024-12-06 11:50:56.683808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.040 [2024-12-06 11:50:56.683875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.040 [2024-12-06 11:50:56.683951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.040 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.041 "name": "Existed_Raid", 00:07:59.041 "uuid": "d2da4645-627d-4722-ae96-95dde57d55bb", 00:07:59.041 "strip_size_kb": 64, 00:07:59.041 "state": "offline", 00:07:59.041 "raid_level": "concat", 00:07:59.041 "superblock": false, 00:07:59.041 "num_base_bdevs": 3, 00:07:59.041 "num_base_bdevs_discovered": 2, 00:07:59.041 "num_base_bdevs_operational": 2, 00:07:59.041 "base_bdevs_list": [ 00:07:59.041 { 00:07:59.041 "name": null, 00:07:59.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.041 "is_configured": false, 00:07:59.041 "data_offset": 0, 00:07:59.041 "data_size": 65536 00:07:59.041 }, 00:07:59.041 { 00:07:59.041 "name": "BaseBdev2", 00:07:59.041 "uuid": 
"ee465616-4c84-478a-a084-059c7c2ee8a1", 00:07:59.041 "is_configured": true, 00:07:59.041 "data_offset": 0, 00:07:59.041 "data_size": 65536 00:07:59.041 }, 00:07:59.041 { 00:07:59.041 "name": "BaseBdev3", 00:07:59.041 "uuid": "d532e55c-64c0-4b6b-a7cc-0b143c7d45ec", 00:07:59.041 "is_configured": true, 00:07:59.041 "data_offset": 0, 00:07:59.041 "data_size": 65536 00:07:59.041 } 00:07:59.041 ] 00:07:59.041 }' 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.041 11:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.611 [2024-12-06 11:50:57.150279] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.611 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 [2024-12-06 11:50:57.209201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:59.612 [2024-12-06 11:50:57.209317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.612 11:50:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 BaseBdev2 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.612 
11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 [ 00:07:59.612 { 00:07:59.612 "name": "BaseBdev2", 00:07:59.612 "aliases": [ 00:07:59.612 "6564a282-e3cc-4636-b04c-d5cbc4ede007" 00:07:59.612 ], 00:07:59.612 "product_name": "Malloc disk", 00:07:59.612 "block_size": 512, 00:07:59.612 "num_blocks": 65536, 00:07:59.612 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:07:59.612 "assigned_rate_limits": { 00:07:59.612 "rw_ios_per_sec": 0, 00:07:59.612 "rw_mbytes_per_sec": 0, 00:07:59.612 "r_mbytes_per_sec": 0, 00:07:59.612 "w_mbytes_per_sec": 0 00:07:59.612 }, 00:07:59.612 "claimed": false, 00:07:59.612 "zoned": false, 00:07:59.612 "supported_io_types": { 00:07:59.612 "read": true, 00:07:59.612 "write": true, 00:07:59.612 "unmap": true, 00:07:59.612 "flush": true, 00:07:59.612 "reset": true, 00:07:59.612 "nvme_admin": false, 00:07:59.612 "nvme_io": false, 00:07:59.612 "nvme_io_md": false, 00:07:59.612 "write_zeroes": true, 
00:07:59.612 "zcopy": true, 00:07:59.612 "get_zone_info": false, 00:07:59.612 "zone_management": false, 00:07:59.612 "zone_append": false, 00:07:59.612 "compare": false, 00:07:59.612 "compare_and_write": false, 00:07:59.612 "abort": true, 00:07:59.612 "seek_hole": false, 00:07:59.612 "seek_data": false, 00:07:59.612 "copy": true, 00:07:59.612 "nvme_iov_md": false 00:07:59.612 }, 00:07:59.612 "memory_domains": [ 00:07:59.612 { 00:07:59.612 "dma_device_id": "system", 00:07:59.612 "dma_device_type": 1 00:07:59.612 }, 00:07:59.612 { 00:07:59.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.612 "dma_device_type": 2 00:07:59.612 } 00:07:59.612 ], 00:07:59.612 "driver_specific": {} 00:07:59.612 } 00:07:59.612 ] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 BaseBdev3 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.612 11:50:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 [ 00:07:59.612 { 00:07:59.612 "name": "BaseBdev3", 00:07:59.612 "aliases": [ 00:07:59.612 "fd3d86fc-7753-46a6-94d9-87f29b578d99" 00:07:59.612 ], 00:07:59.612 "product_name": "Malloc disk", 00:07:59.612 "block_size": 512, 00:07:59.612 "num_blocks": 65536, 00:07:59.612 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:07:59.612 "assigned_rate_limits": { 00:07:59.612 "rw_ios_per_sec": 0, 00:07:59.612 "rw_mbytes_per_sec": 0, 00:07:59.612 "r_mbytes_per_sec": 0, 00:07:59.612 "w_mbytes_per_sec": 0 00:07:59.612 }, 00:07:59.612 "claimed": false, 00:07:59.612 "zoned": false, 00:07:59.612 "supported_io_types": { 00:07:59.612 "read": true, 00:07:59.612 "write": true, 00:07:59.612 "unmap": true, 00:07:59.612 "flush": true, 00:07:59.612 "reset": true, 00:07:59.612 "nvme_admin": false, 00:07:59.612 "nvme_io": false, 00:07:59.612 "nvme_io_md": false, 00:07:59.612 "write_zeroes": true, 
00:07:59.612 "zcopy": true, 00:07:59.612 "get_zone_info": false, 00:07:59.612 "zone_management": false, 00:07:59.612 "zone_append": false, 00:07:59.612 "compare": false, 00:07:59.612 "compare_and_write": false, 00:07:59.612 "abort": true, 00:07:59.612 "seek_hole": false, 00:07:59.612 "seek_data": false, 00:07:59.612 "copy": true, 00:07:59.612 "nvme_iov_md": false 00:07:59.612 }, 00:07:59.612 "memory_domains": [ 00:07:59.612 { 00:07:59.612 "dma_device_id": "system", 00:07:59.612 "dma_device_type": 1 00:07:59.612 }, 00:07:59.612 { 00:07:59.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.612 "dma_device_type": 2 00:07:59.612 } 00:07:59.612 ], 00:07:59.612 "driver_specific": {} 00:07:59.612 } 00:07:59.612 ] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.612 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 [2024-12-06 11:50:57.367551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.873 [2024-12-06 11:50:57.367636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.873 [2024-12-06 11:50:57.367680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.873 [2024-12-06 11:50:57.369420] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.873 "name": "Existed_Raid", 00:07:59.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.873 "strip_size_kb": 64, 00:07:59.873 "state": "configuring", 00:07:59.873 "raid_level": "concat", 00:07:59.873 "superblock": false, 00:07:59.873 "num_base_bdevs": 3, 00:07:59.873 "num_base_bdevs_discovered": 2, 00:07:59.873 "num_base_bdevs_operational": 3, 00:07:59.873 "base_bdevs_list": [ 00:07:59.873 { 00:07:59.873 "name": "BaseBdev1", 00:07:59.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.873 "is_configured": false, 00:07:59.873 "data_offset": 0, 00:07:59.873 "data_size": 0 00:07:59.873 }, 00:07:59.873 { 00:07:59.873 "name": "BaseBdev2", 00:07:59.873 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:07:59.873 "is_configured": true, 00:07:59.873 "data_offset": 0, 00:07:59.873 "data_size": 65536 00:07:59.873 }, 00:07:59.873 { 00:07:59.873 "name": "BaseBdev3", 00:07:59.873 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:07:59.873 "is_configured": true, 00:07:59.873 "data_offset": 0, 00:07:59.873 "data_size": 65536 00:07:59.873 } 00:07:59.873 ] 00:07:59.873 }' 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.873 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.133 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:00.133 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.133 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.133 [2024-12-06 11:50:57.794881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.133 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.133 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.134 "name": "Existed_Raid", 00:08:00.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.134 "strip_size_kb": 64, 00:08:00.134 "state": "configuring", 00:08:00.134 "raid_level": "concat", 00:08:00.134 "superblock": false, 
00:08:00.134 "num_base_bdevs": 3, 00:08:00.134 "num_base_bdevs_discovered": 1, 00:08:00.134 "num_base_bdevs_operational": 3, 00:08:00.134 "base_bdevs_list": [ 00:08:00.134 { 00:08:00.134 "name": "BaseBdev1", 00:08:00.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.134 "is_configured": false, 00:08:00.134 "data_offset": 0, 00:08:00.134 "data_size": 0 00:08:00.134 }, 00:08:00.134 { 00:08:00.134 "name": null, 00:08:00.134 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:00.134 "is_configured": false, 00:08:00.134 "data_offset": 0, 00:08:00.134 "data_size": 65536 00:08:00.134 }, 00:08:00.134 { 00:08:00.134 "name": "BaseBdev3", 00:08:00.134 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:00.134 "is_configured": true, 00:08:00.134 "data_offset": 0, 00:08:00.134 "data_size": 65536 00:08:00.134 } 00:08:00.134 ] 00:08:00.134 }' 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.134 11:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.704 
11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 [2024-12-06 11:50:58.260965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.704 BaseBdev1 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 [ 00:08:00.704 { 00:08:00.704 "name": "BaseBdev1", 00:08:00.704 "aliases": [ 00:08:00.704 "b466a2cd-0bfc-4aa7-a525-b0014e8a756a" 00:08:00.704 ], 00:08:00.704 "product_name": 
"Malloc disk", 00:08:00.704 "block_size": 512, 00:08:00.704 "num_blocks": 65536, 00:08:00.704 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:00.704 "assigned_rate_limits": { 00:08:00.704 "rw_ios_per_sec": 0, 00:08:00.704 "rw_mbytes_per_sec": 0, 00:08:00.704 "r_mbytes_per_sec": 0, 00:08:00.704 "w_mbytes_per_sec": 0 00:08:00.704 }, 00:08:00.704 "claimed": true, 00:08:00.704 "claim_type": "exclusive_write", 00:08:00.704 "zoned": false, 00:08:00.704 "supported_io_types": { 00:08:00.704 "read": true, 00:08:00.704 "write": true, 00:08:00.704 "unmap": true, 00:08:00.704 "flush": true, 00:08:00.704 "reset": true, 00:08:00.704 "nvme_admin": false, 00:08:00.704 "nvme_io": false, 00:08:00.704 "nvme_io_md": false, 00:08:00.704 "write_zeroes": true, 00:08:00.704 "zcopy": true, 00:08:00.704 "get_zone_info": false, 00:08:00.704 "zone_management": false, 00:08:00.704 "zone_append": false, 00:08:00.704 "compare": false, 00:08:00.704 "compare_and_write": false, 00:08:00.704 "abort": true, 00:08:00.704 "seek_hole": false, 00:08:00.704 "seek_data": false, 00:08:00.704 "copy": true, 00:08:00.704 "nvme_iov_md": false 00:08:00.704 }, 00:08:00.704 "memory_domains": [ 00:08:00.704 { 00:08:00.704 "dma_device_id": "system", 00:08:00.704 "dma_device_type": 1 00:08:00.704 }, 00:08:00.704 { 00:08:00.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.704 "dma_device_type": 2 00:08:00.704 } 00:08:00.704 ], 00:08:00.704 "driver_specific": {} 00:08:00.704 } 00:08:00.704 ] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.704 11:50:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.704 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.704 "name": "Existed_Raid", 00:08:00.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.704 "strip_size_kb": 64, 00:08:00.704 "state": "configuring", 00:08:00.704 "raid_level": "concat", 00:08:00.704 "superblock": false, 00:08:00.704 "num_base_bdevs": 3, 00:08:00.704 "num_base_bdevs_discovered": 2, 00:08:00.704 "num_base_bdevs_operational": 3, 00:08:00.704 "base_bdevs_list": [ 00:08:00.704 { 00:08:00.704 "name": "BaseBdev1", 
00:08:00.704 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:00.705 "is_configured": true, 00:08:00.705 "data_offset": 0, 00:08:00.705 "data_size": 65536 00:08:00.705 }, 00:08:00.705 { 00:08:00.705 "name": null, 00:08:00.705 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:00.705 "is_configured": false, 00:08:00.705 "data_offset": 0, 00:08:00.705 "data_size": 65536 00:08:00.705 }, 00:08:00.705 { 00:08:00.705 "name": "BaseBdev3", 00:08:00.705 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:00.705 "is_configured": true, 00:08:00.705 "data_offset": 0, 00:08:00.705 "data_size": 65536 00:08:00.705 } 00:08:00.705 ] 00:08:00.705 }' 00:08:00.705 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.705 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.964 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:00.964 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.224 [2024-12-06 11:50:58.768169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.224 
11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.224 "name": "Existed_Raid", 00:08:01.224 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:01.224 "strip_size_kb": 64, 00:08:01.224 "state": "configuring", 00:08:01.224 "raid_level": "concat", 00:08:01.224 "superblock": false, 00:08:01.224 "num_base_bdevs": 3, 00:08:01.224 "num_base_bdevs_discovered": 1, 00:08:01.224 "num_base_bdevs_operational": 3, 00:08:01.224 "base_bdevs_list": [ 00:08:01.224 { 00:08:01.224 "name": "BaseBdev1", 00:08:01.224 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:01.224 "is_configured": true, 00:08:01.224 "data_offset": 0, 00:08:01.224 "data_size": 65536 00:08:01.224 }, 00:08:01.224 { 00:08:01.224 "name": null, 00:08:01.224 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:01.224 "is_configured": false, 00:08:01.224 "data_offset": 0, 00:08:01.224 "data_size": 65536 00:08:01.224 }, 00:08:01.224 { 00:08:01.224 "name": null, 00:08:01.224 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:01.224 "is_configured": false, 00:08:01.224 "data_offset": 0, 00:08:01.224 "data_size": 65536 00:08:01.224 } 00:08:01.224 ] 00:08:01.224 }' 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.224 11:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.484 [2024-12-06 11:50:59.219382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.484 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.745 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.745 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.745 "name": "Existed_Raid", 00:08:01.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.745 "strip_size_kb": 64, 00:08:01.745 "state": "configuring", 00:08:01.745 "raid_level": "concat", 00:08:01.745 "superblock": false, 00:08:01.745 "num_base_bdevs": 3, 00:08:01.745 "num_base_bdevs_discovered": 2, 00:08:01.745 "num_base_bdevs_operational": 3, 00:08:01.745 "base_bdevs_list": [ 00:08:01.745 { 00:08:01.745 "name": "BaseBdev1", 00:08:01.745 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:01.745 "is_configured": true, 00:08:01.745 "data_offset": 0, 00:08:01.745 "data_size": 65536 00:08:01.745 }, 00:08:01.745 { 00:08:01.745 "name": null, 00:08:01.745 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:01.745 "is_configured": false, 00:08:01.745 "data_offset": 0, 00:08:01.745 "data_size": 65536 00:08:01.745 }, 00:08:01.745 { 00:08:01.745 "name": "BaseBdev3", 00:08:01.745 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:01.745 "is_configured": true, 00:08:01.745 "data_offset": 0, 00:08:01.745 "data_size": 65536 00:08:01.745 } 00:08:01.745 ] 00:08:01.745 }' 00:08:01.745 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.745 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.005 [2024-12-06 11:50:59.718685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.005 11:50:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.005 11:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.266 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.266 "name": "Existed_Raid", 00:08:02.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.266 "strip_size_kb": 64, 00:08:02.266 "state": "configuring", 00:08:02.266 "raid_level": "concat", 00:08:02.266 "superblock": false, 00:08:02.266 "num_base_bdevs": 3, 00:08:02.266 "num_base_bdevs_discovered": 1, 00:08:02.266 "num_base_bdevs_operational": 3, 00:08:02.266 "base_bdevs_list": [ 00:08:02.266 { 00:08:02.266 "name": null, 00:08:02.266 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:02.266 "is_configured": false, 00:08:02.266 "data_offset": 0, 00:08:02.266 "data_size": 65536 00:08:02.266 }, 00:08:02.266 { 00:08:02.266 "name": null, 00:08:02.266 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:02.266 "is_configured": false, 00:08:02.266 "data_offset": 0, 00:08:02.266 "data_size": 65536 00:08:02.266 }, 00:08:02.266 { 00:08:02.266 "name": "BaseBdev3", 00:08:02.266 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:02.266 "is_configured": true, 00:08:02.266 "data_offset": 0, 00:08:02.266 "data_size": 65536 00:08:02.266 } 00:08:02.266 ] 00:08:02.266 }' 00:08:02.266 11:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.266 11:50:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 [2024-12-06 11:51:00.208258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.526 11:51:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.526 "name": "Existed_Raid", 00:08:02.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.526 "strip_size_kb": 64, 00:08:02.526 "state": "configuring", 00:08:02.526 "raid_level": "concat", 00:08:02.526 "superblock": false, 00:08:02.526 "num_base_bdevs": 3, 00:08:02.526 "num_base_bdevs_discovered": 2, 00:08:02.526 "num_base_bdevs_operational": 3, 00:08:02.526 "base_bdevs_list": [ 00:08:02.526 { 00:08:02.526 "name": null, 00:08:02.526 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:02.526 "is_configured": false, 00:08:02.526 "data_offset": 0, 00:08:02.526 "data_size": 65536 00:08:02.526 }, 00:08:02.526 { 00:08:02.526 "name": "BaseBdev2", 00:08:02.526 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:02.526 "is_configured": true, 00:08:02.526 "data_offset": 
0, 00:08:02.526 "data_size": 65536 00:08:02.526 }, 00:08:02.526 { 00:08:02.526 "name": "BaseBdev3", 00:08:02.526 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:02.526 "is_configured": true, 00:08:02.526 "data_offset": 0, 00:08:02.526 "data_size": 65536 00:08:02.526 } 00:08:02.526 ] 00:08:02.526 }' 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.526 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b466a2cd-0bfc-4aa7-a525-b0014e8a756a 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 [2024-12-06 11:51:00.690175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:03.095 [2024-12-06 11:51:00.690219] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:03.095 [2024-12-06 11:51:00.690229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:03.095 [2024-12-06 11:51:00.690481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:03.095 [2024-12-06 11:51:00.690601] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:03.095 [2024-12-06 11:51:00.690610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:03.095 [2024-12-06 11:51:00.690778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.095 NewBaseBdev 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:03.095 
11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.095 [ 00:08:03.095 { 00:08:03.095 "name": "NewBaseBdev", 00:08:03.095 "aliases": [ 00:08:03.095 "b466a2cd-0bfc-4aa7-a525-b0014e8a756a" 00:08:03.095 ], 00:08:03.095 "product_name": "Malloc disk", 00:08:03.095 "block_size": 512, 00:08:03.095 "num_blocks": 65536, 00:08:03.095 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:03.095 "assigned_rate_limits": { 00:08:03.095 "rw_ios_per_sec": 0, 00:08:03.095 "rw_mbytes_per_sec": 0, 00:08:03.095 "r_mbytes_per_sec": 0, 00:08:03.095 "w_mbytes_per_sec": 0 00:08:03.095 }, 00:08:03.095 "claimed": true, 00:08:03.095 "claim_type": "exclusive_write", 00:08:03.095 "zoned": false, 00:08:03.095 "supported_io_types": { 00:08:03.095 "read": true, 00:08:03.095 "write": true, 00:08:03.095 "unmap": true, 00:08:03.095 "flush": true, 00:08:03.095 "reset": true, 00:08:03.095 "nvme_admin": false, 00:08:03.095 "nvme_io": false, 00:08:03.095 "nvme_io_md": false, 00:08:03.095 "write_zeroes": true, 00:08:03.095 "zcopy": true, 00:08:03.095 "get_zone_info": false, 00:08:03.095 "zone_management": false, 00:08:03.095 "zone_append": false, 00:08:03.095 "compare": false, 00:08:03.095 "compare_and_write": false, 00:08:03.095 "abort": true, 00:08:03.095 "seek_hole": false, 00:08:03.095 "seek_data": false, 00:08:03.095 "copy": true, 00:08:03.095 "nvme_iov_md": false 00:08:03.095 }, 00:08:03.095 
"memory_domains": [ 00:08:03.095 { 00:08:03.095 "dma_device_id": "system", 00:08:03.095 "dma_device_type": 1 00:08:03.095 }, 00:08:03.095 { 00:08:03.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.095 "dma_device_type": 2 00:08:03.095 } 00:08:03.095 ], 00:08:03.095 "driver_specific": {} 00:08:03.095 } 00:08:03.095 ] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.095 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.095 "name": "Existed_Raid", 00:08:03.095 "uuid": "0959a4f7-9d01-499e-b52b-54e5aa0fded9", 00:08:03.095 "strip_size_kb": 64, 00:08:03.095 "state": "online", 00:08:03.095 "raid_level": "concat", 00:08:03.095 "superblock": false, 00:08:03.095 "num_base_bdevs": 3, 00:08:03.095 "num_base_bdevs_discovered": 3, 00:08:03.095 "num_base_bdevs_operational": 3, 00:08:03.095 "base_bdevs_list": [ 00:08:03.095 { 00:08:03.095 "name": "NewBaseBdev", 00:08:03.095 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:03.095 "is_configured": true, 00:08:03.095 "data_offset": 0, 00:08:03.095 "data_size": 65536 00:08:03.095 }, 00:08:03.095 { 00:08:03.095 "name": "BaseBdev2", 00:08:03.095 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:03.095 "is_configured": true, 00:08:03.095 "data_offset": 0, 00:08:03.095 "data_size": 65536 00:08:03.095 }, 00:08:03.095 { 00:08:03.095 "name": "BaseBdev3", 00:08:03.095 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:03.096 "is_configured": true, 00:08:03.096 "data_offset": 0, 00:08:03.096 "data_size": 65536 00:08:03.096 } 00:08:03.096 ] 00:08:03.096 }' 00:08:03.096 11:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.096 11:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.663 [2024-12-06 11:51:01.145669] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.663 "name": "Existed_Raid", 00:08:03.663 "aliases": [ 00:08:03.663 "0959a4f7-9d01-499e-b52b-54e5aa0fded9" 00:08:03.663 ], 00:08:03.663 "product_name": "Raid Volume", 00:08:03.663 "block_size": 512, 00:08:03.663 "num_blocks": 196608, 00:08:03.663 "uuid": "0959a4f7-9d01-499e-b52b-54e5aa0fded9", 00:08:03.663 "assigned_rate_limits": { 00:08:03.663 "rw_ios_per_sec": 0, 00:08:03.663 "rw_mbytes_per_sec": 0, 00:08:03.663 "r_mbytes_per_sec": 0, 00:08:03.663 "w_mbytes_per_sec": 0 00:08:03.663 }, 00:08:03.663 "claimed": false, 00:08:03.663 "zoned": false, 00:08:03.663 "supported_io_types": { 00:08:03.663 "read": true, 00:08:03.663 "write": true, 00:08:03.663 "unmap": true, 00:08:03.663 "flush": true, 00:08:03.663 "reset": true, 00:08:03.663 "nvme_admin": false, 00:08:03.663 "nvme_io": false, 00:08:03.663 "nvme_io_md": false, 00:08:03.663 
"write_zeroes": true, 00:08:03.663 "zcopy": false, 00:08:03.663 "get_zone_info": false, 00:08:03.663 "zone_management": false, 00:08:03.663 "zone_append": false, 00:08:03.663 "compare": false, 00:08:03.663 "compare_and_write": false, 00:08:03.663 "abort": false, 00:08:03.663 "seek_hole": false, 00:08:03.663 "seek_data": false, 00:08:03.663 "copy": false, 00:08:03.663 "nvme_iov_md": false 00:08:03.663 }, 00:08:03.663 "memory_domains": [ 00:08:03.663 { 00:08:03.663 "dma_device_id": "system", 00:08:03.663 "dma_device_type": 1 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.663 "dma_device_type": 2 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "dma_device_id": "system", 00:08:03.663 "dma_device_type": 1 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.663 "dma_device_type": 2 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "dma_device_id": "system", 00:08:03.663 "dma_device_type": 1 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.663 "dma_device_type": 2 00:08:03.663 } 00:08:03.663 ], 00:08:03.663 "driver_specific": { 00:08:03.663 "raid": { 00:08:03.663 "uuid": "0959a4f7-9d01-499e-b52b-54e5aa0fded9", 00:08:03.663 "strip_size_kb": 64, 00:08:03.663 "state": "online", 00:08:03.663 "raid_level": "concat", 00:08:03.663 "superblock": false, 00:08:03.663 "num_base_bdevs": 3, 00:08:03.663 "num_base_bdevs_discovered": 3, 00:08:03.663 "num_base_bdevs_operational": 3, 00:08:03.663 "base_bdevs_list": [ 00:08:03.663 { 00:08:03.663 "name": "NewBaseBdev", 00:08:03.663 "uuid": "b466a2cd-0bfc-4aa7-a525-b0014e8a756a", 00:08:03.663 "is_configured": true, 00:08:03.663 "data_offset": 0, 00:08:03.663 "data_size": 65536 00:08:03.663 }, 00:08:03.663 { 00:08:03.663 "name": "BaseBdev2", 00:08:03.663 "uuid": "6564a282-e3cc-4636-b04c-d5cbc4ede007", 00:08:03.663 "is_configured": true, 00:08:03.663 "data_offset": 0, 00:08:03.663 "data_size": 65536 00:08:03.663 }, 
00:08:03.663 { 00:08:03.663 "name": "BaseBdev3", 00:08:03.663 "uuid": "fd3d86fc-7753-46a6-94d9-87f29b578d99", 00:08:03.663 "is_configured": true, 00:08:03.663 "data_offset": 0, 00:08:03.663 "data_size": 65536 00:08:03.663 } 00:08:03.663 ] 00:08:03.663 } 00:08:03.663 } 00:08:03.663 }' 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.663 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:03.663 BaseBdev2 00:08:03.663 BaseBdev3' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.664 11:51:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.664 
11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.664 [2024-12-06 11:51:01.400956] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.664 [2024-12-06 11:51:01.400984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.664 [2024-12-06 11:51:01.401052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.664 [2024-12-06 11:51:01.401100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.664 [2024-12-06 11:51:01.401112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76426 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76426 ']' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76426 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.664 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76426 00:08:03.923 killing process with pid 76426 00:08:03.923 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.923 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.923 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76426' 00:08:03.923 11:51:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 76426 00:08:03.923 [2024-12-06 11:51:01.448929] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.923 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76426 00:08:03.923 [2024-12-06 11:51:01.479424] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.182 ************************************ 00:08:04.182 END TEST raid_state_function_test 00:08:04.182 ************************************ 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.182 00:08:04.182 real 0m8.472s 00:08:04.182 user 0m14.431s 00:08:04.182 sys 0m1.669s 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.182 11:51:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:04.182 11:51:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.182 11:51:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.182 11:51:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.182 ************************************ 00:08:04.182 START TEST raid_state_function_test_sb 00:08:04.182 ************************************ 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.182 11:51:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.182 11:51:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77031 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77031' 00:08:04.182 Process raid pid: 77031 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77031 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77031 ']' 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.182 11:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.182 [2024-12-06 11:51:01.879685] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:04.182 [2024-12-06 11:51:01.879801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.441 [2024-12-06 11:51:02.023926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.441 [2024-12-06 11:51:02.067409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.441 [2024-12-06 11:51:02.108815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.441 [2024-12-06 11:51:02.108850] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.008 [2024-12-06 11:51:02.709401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.008 [2024-12-06 11:51:02.709446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.008 [2024-12-06 
11:51:02.709464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.008 [2024-12-06 11:51:02.709475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.008 [2024-12-06 11:51:02.709481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.008 [2024-12-06 11:51:02.709492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.008 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.009 "name": "Existed_Raid", 00:08:05.009 "uuid": "20f8bfae-2e6a-4c77-a8e8-c5c95e8fe2d7", 00:08:05.009 "strip_size_kb": 64, 00:08:05.009 "state": "configuring", 00:08:05.009 "raid_level": "concat", 00:08:05.009 "superblock": true, 00:08:05.009 "num_base_bdevs": 3, 00:08:05.009 "num_base_bdevs_discovered": 0, 00:08:05.009 "num_base_bdevs_operational": 3, 00:08:05.009 "base_bdevs_list": [ 00:08:05.009 { 00:08:05.009 "name": "BaseBdev1", 00:08:05.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.009 "is_configured": false, 00:08:05.009 "data_offset": 0, 00:08:05.009 "data_size": 0 00:08:05.009 }, 00:08:05.009 { 00:08:05.009 "name": "BaseBdev2", 00:08:05.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.009 "is_configured": false, 00:08:05.009 "data_offset": 0, 00:08:05.009 "data_size": 0 00:08:05.009 }, 00:08:05.009 { 00:08:05.009 "name": "BaseBdev3", 00:08:05.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.009 "is_configured": false, 00:08:05.009 "data_offset": 0, 00:08:05.009 "data_size": 0 00:08:05.009 } 00:08:05.009 ] 00:08:05.009 }' 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.009 11:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 [2024-12-06 11:51:03.136531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.577 [2024-12-06 11:51:03.136590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 [2024-12-06 11:51:03.148559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.577 [2024-12-06 11:51:03.148599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.577 [2024-12-06 11:51:03.148607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.577 [2024-12-06 11:51:03.148616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.577 [2024-12-06 11:51:03.148622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.577 [2024-12-06 11:51:03.148630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.577 
11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 [2024-12-06 11:51:03.169183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.577 BaseBdev1 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.577 [ 00:08:05.577 { 
00:08:05.577 "name": "BaseBdev1", 00:08:05.577 "aliases": [ 00:08:05.577 "a3e13687-47c5-44a1-952d-ec487c7d20a5" 00:08:05.577 ], 00:08:05.577 "product_name": "Malloc disk", 00:08:05.577 "block_size": 512, 00:08:05.577 "num_blocks": 65536, 00:08:05.577 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:05.577 "assigned_rate_limits": { 00:08:05.577 "rw_ios_per_sec": 0, 00:08:05.577 "rw_mbytes_per_sec": 0, 00:08:05.577 "r_mbytes_per_sec": 0, 00:08:05.577 "w_mbytes_per_sec": 0 00:08:05.577 }, 00:08:05.577 "claimed": true, 00:08:05.577 "claim_type": "exclusive_write", 00:08:05.577 "zoned": false, 00:08:05.577 "supported_io_types": { 00:08:05.577 "read": true, 00:08:05.577 "write": true, 00:08:05.577 "unmap": true, 00:08:05.577 "flush": true, 00:08:05.577 "reset": true, 00:08:05.577 "nvme_admin": false, 00:08:05.577 "nvme_io": false, 00:08:05.577 "nvme_io_md": false, 00:08:05.577 "write_zeroes": true, 00:08:05.577 "zcopy": true, 00:08:05.577 "get_zone_info": false, 00:08:05.577 "zone_management": false, 00:08:05.577 "zone_append": false, 00:08:05.577 "compare": false, 00:08:05.577 "compare_and_write": false, 00:08:05.577 "abort": true, 00:08:05.577 "seek_hole": false, 00:08:05.577 "seek_data": false, 00:08:05.577 "copy": true, 00:08:05.577 "nvme_iov_md": false 00:08:05.577 }, 00:08:05.577 "memory_domains": [ 00:08:05.577 { 00:08:05.577 "dma_device_id": "system", 00:08:05.577 "dma_device_type": 1 00:08:05.577 }, 00:08:05.577 { 00:08:05.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.577 "dma_device_type": 2 00:08:05.577 } 00:08:05.577 ], 00:08:05.577 "driver_specific": {} 00:08:05.577 } 00:08:05.577 ] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.577 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.578 "name": "Existed_Raid", 00:08:05.578 "uuid": "11e4de1e-63e6-4452-bc6f-5f5210c53469", 00:08:05.578 "strip_size_kb": 64, 00:08:05.578 "state": "configuring", 00:08:05.578 "raid_level": "concat", 00:08:05.578 "superblock": true, 00:08:05.578 
"num_base_bdevs": 3, 00:08:05.578 "num_base_bdevs_discovered": 1, 00:08:05.578 "num_base_bdevs_operational": 3, 00:08:05.578 "base_bdevs_list": [ 00:08:05.578 { 00:08:05.578 "name": "BaseBdev1", 00:08:05.578 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:05.578 "is_configured": true, 00:08:05.578 "data_offset": 2048, 00:08:05.578 "data_size": 63488 00:08:05.578 }, 00:08:05.578 { 00:08:05.578 "name": "BaseBdev2", 00:08:05.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.578 "is_configured": false, 00:08:05.578 "data_offset": 0, 00:08:05.578 "data_size": 0 00:08:05.578 }, 00:08:05.578 { 00:08:05.578 "name": "BaseBdev3", 00:08:05.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.578 "is_configured": false, 00:08:05.578 "data_offset": 0, 00:08:05.578 "data_size": 0 00:08:05.578 } 00:08:05.578 ] 00:08:05.578 }' 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.578 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.144 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.144 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.144 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.144 [2024-12-06 11:51:03.632409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.144 [2024-12-06 11:51:03.632448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.145 
11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 [2024-12-06 11:51:03.644441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.145 [2024-12-06 11:51:03.646263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.145 [2024-12-06 11:51:03.646300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.145 [2024-12-06 11:51:03.646310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.145 [2024-12-06 11:51:03.646319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.145 "name": "Existed_Raid", 00:08:06.145 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:06.145 "strip_size_kb": 64, 00:08:06.145 "state": "configuring", 00:08:06.145 "raid_level": "concat", 00:08:06.145 "superblock": true, 00:08:06.145 "num_base_bdevs": 3, 00:08:06.145 "num_base_bdevs_discovered": 1, 00:08:06.145 "num_base_bdevs_operational": 3, 00:08:06.145 "base_bdevs_list": [ 00:08:06.145 { 00:08:06.145 "name": "BaseBdev1", 00:08:06.145 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:06.145 "is_configured": true, 00:08:06.145 "data_offset": 2048, 00:08:06.145 "data_size": 63488 00:08:06.145 }, 00:08:06.145 { 00:08:06.145 "name": "BaseBdev2", 00:08:06.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.145 "is_configured": false, 00:08:06.145 "data_offset": 0, 00:08:06.145 "data_size": 0 00:08:06.145 }, 00:08:06.145 { 00:08:06.145 "name": "BaseBdev3", 00:08:06.145 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:06.145 "is_configured": false, 00:08:06.145 "data_offset": 0, 00:08:06.145 "data_size": 0 00:08:06.145 } 00:08:06.145 ] 00:08:06.145 }' 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.145 11:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.404 BaseBdev2 00:08:06.404 [2024-12-06 11:51:04.093945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.404 [ 00:08:06.404 { 00:08:06.404 "name": "BaseBdev2", 00:08:06.404 "aliases": [ 00:08:06.404 "15f4f3a2-34b6-44f0-b2c5-c7726867b677" 00:08:06.404 ], 00:08:06.404 "product_name": "Malloc disk", 00:08:06.404 "block_size": 512, 00:08:06.404 "num_blocks": 65536, 00:08:06.404 "uuid": "15f4f3a2-34b6-44f0-b2c5-c7726867b677", 00:08:06.404 "assigned_rate_limits": { 00:08:06.404 "rw_ios_per_sec": 0, 00:08:06.404 "rw_mbytes_per_sec": 0, 00:08:06.404 "r_mbytes_per_sec": 0, 00:08:06.404 "w_mbytes_per_sec": 0 00:08:06.404 }, 00:08:06.404 "claimed": true, 00:08:06.404 "claim_type": "exclusive_write", 00:08:06.404 "zoned": false, 00:08:06.404 "supported_io_types": { 00:08:06.404 "read": true, 00:08:06.404 "write": true, 00:08:06.404 "unmap": true, 00:08:06.404 "flush": true, 00:08:06.404 "reset": true, 00:08:06.404 "nvme_admin": false, 00:08:06.404 "nvme_io": false, 00:08:06.404 "nvme_io_md": false, 00:08:06.404 "write_zeroes": true, 00:08:06.404 "zcopy": true, 00:08:06.404 "get_zone_info": false, 00:08:06.404 "zone_management": false, 00:08:06.404 "zone_append": false, 00:08:06.404 "compare": false, 00:08:06.404 "compare_and_write": false, 00:08:06.404 "abort": true, 00:08:06.404 "seek_hole": false, 00:08:06.404 "seek_data": false, 00:08:06.404 "copy": true, 00:08:06.404 "nvme_iov_md": false 00:08:06.404 }, 00:08:06.404 "memory_domains": [ 00:08:06.404 { 00:08:06.404 "dma_device_id": "system", 00:08:06.404 "dma_device_type": 1 00:08:06.404 }, 00:08:06.404 { 00:08:06.404 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.404 "dma_device_type": 2 00:08:06.404 } 00:08:06.404 ], 00:08:06.404 "driver_specific": {} 00:08:06.404 } 00:08:06.404 ] 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.404 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.663 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.663 "name": "Existed_Raid", 00:08:06.663 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:06.663 "strip_size_kb": 64, 00:08:06.663 "state": "configuring", 00:08:06.663 "raid_level": "concat", 00:08:06.663 "superblock": true, 00:08:06.663 "num_base_bdevs": 3, 00:08:06.663 "num_base_bdevs_discovered": 2, 00:08:06.663 "num_base_bdevs_operational": 3, 00:08:06.663 "base_bdevs_list": [ 00:08:06.663 { 00:08:06.663 "name": "BaseBdev1", 00:08:06.663 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:06.663 "is_configured": true, 00:08:06.663 "data_offset": 2048, 00:08:06.663 "data_size": 63488 00:08:06.663 }, 00:08:06.663 { 00:08:06.663 "name": "BaseBdev2", 00:08:06.663 "uuid": "15f4f3a2-34b6-44f0-b2c5-c7726867b677", 00:08:06.663 "is_configured": true, 00:08:06.663 "data_offset": 2048, 00:08:06.663 "data_size": 63488 00:08:06.663 }, 00:08:06.663 { 00:08:06.663 "name": "BaseBdev3", 00:08:06.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.663 "is_configured": false, 00:08:06.663 "data_offset": 0, 00:08:06.663 "data_size": 0 00:08:06.663 } 00:08:06.663 ] 00:08:06.663 }' 00:08:06.663 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.663 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:06.924 11:51:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.924 [2024-12-06 11:51:04.571888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.924 [2024-12-06 11:51:04.572064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.924 [2024-12-06 11:51:04.572080] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:06.924 [2024-12-06 11:51:04.572389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:06.924 [2024-12-06 11:51:04.572524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.924 [2024-12-06 11:51:04.572541] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:06.924 BaseBdev3 00:08:06.924 [2024-12-06 11:51:04.572670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.924 [ 00:08:06.924 { 00:08:06.924 "name": "BaseBdev3", 00:08:06.924 "aliases": [ 00:08:06.924 "91a03200-c4cc-490d-a6a5-964fdd887628" 00:08:06.924 ], 00:08:06.924 "product_name": "Malloc disk", 00:08:06.924 "block_size": 512, 00:08:06.924 "num_blocks": 65536, 00:08:06.924 "uuid": "91a03200-c4cc-490d-a6a5-964fdd887628", 00:08:06.924 "assigned_rate_limits": { 00:08:06.924 "rw_ios_per_sec": 0, 00:08:06.924 "rw_mbytes_per_sec": 0, 00:08:06.924 "r_mbytes_per_sec": 0, 00:08:06.924 "w_mbytes_per_sec": 0 00:08:06.924 }, 00:08:06.924 "claimed": true, 00:08:06.924 "claim_type": "exclusive_write", 00:08:06.924 "zoned": false, 00:08:06.924 "supported_io_types": { 00:08:06.924 "read": true, 00:08:06.924 "write": true, 00:08:06.924 "unmap": true, 00:08:06.924 "flush": true, 00:08:06.924 "reset": true, 00:08:06.924 "nvme_admin": false, 00:08:06.924 "nvme_io": false, 00:08:06.924 "nvme_io_md": false, 00:08:06.924 "write_zeroes": true, 00:08:06.924 "zcopy": true, 00:08:06.924 "get_zone_info": false, 00:08:06.924 "zone_management": false, 00:08:06.924 "zone_append": false, 00:08:06.924 "compare": false, 00:08:06.924 "compare_and_write": false, 00:08:06.924 "abort": true, 00:08:06.924 "seek_hole": false, 00:08:06.924 "seek_data": false, 
00:08:06.924 "copy": true, 00:08:06.924 "nvme_iov_md": false 00:08:06.924 }, 00:08:06.924 "memory_domains": [ 00:08:06.924 { 00:08:06.924 "dma_device_id": "system", 00:08:06.924 "dma_device_type": 1 00:08:06.924 }, 00:08:06.924 { 00:08:06.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.924 "dma_device_type": 2 00:08:06.924 } 00:08:06.924 ], 00:08:06.924 "driver_specific": {} 00:08:06.924 } 00:08:06.924 ] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.924 "name": "Existed_Raid", 00:08:06.924 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:06.924 "strip_size_kb": 64, 00:08:06.924 "state": "online", 00:08:06.924 "raid_level": "concat", 00:08:06.924 "superblock": true, 00:08:06.924 "num_base_bdevs": 3, 00:08:06.924 "num_base_bdevs_discovered": 3, 00:08:06.924 "num_base_bdevs_operational": 3, 00:08:06.924 "base_bdevs_list": [ 00:08:06.924 { 00:08:06.924 "name": "BaseBdev1", 00:08:06.924 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:06.924 "is_configured": true, 00:08:06.924 "data_offset": 2048, 00:08:06.924 "data_size": 63488 00:08:06.924 }, 00:08:06.924 { 00:08:06.924 "name": "BaseBdev2", 00:08:06.924 "uuid": "15f4f3a2-34b6-44f0-b2c5-c7726867b677", 00:08:06.924 "is_configured": true, 00:08:06.924 "data_offset": 2048, 00:08:06.924 "data_size": 63488 00:08:06.924 }, 00:08:06.924 { 00:08:06.924 "name": "BaseBdev3", 00:08:06.924 "uuid": "91a03200-c4cc-490d-a6a5-964fdd887628", 00:08:06.924 "is_configured": true, 00:08:06.924 "data_offset": 2048, 00:08:06.924 "data_size": 63488 00:08:06.924 } 00:08:06.924 ] 00:08:06.924 }' 00:08:06.924 11:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.924 11:51:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.493 [2024-12-06 11:51:05.051446] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.493 "name": "Existed_Raid", 00:08:07.493 "aliases": [ 00:08:07.493 "ed31bc73-0bc5-43bd-a49f-e160a7c93a02" 00:08:07.493 ], 00:08:07.493 "product_name": "Raid Volume", 00:08:07.493 "block_size": 512, 00:08:07.493 "num_blocks": 190464, 00:08:07.493 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:07.493 "assigned_rate_limits": { 00:08:07.493 "rw_ios_per_sec": 0, 00:08:07.493 "rw_mbytes_per_sec": 0, 00:08:07.493 
"r_mbytes_per_sec": 0, 00:08:07.493 "w_mbytes_per_sec": 0 00:08:07.493 }, 00:08:07.493 "claimed": false, 00:08:07.493 "zoned": false, 00:08:07.493 "supported_io_types": { 00:08:07.493 "read": true, 00:08:07.493 "write": true, 00:08:07.493 "unmap": true, 00:08:07.493 "flush": true, 00:08:07.493 "reset": true, 00:08:07.493 "nvme_admin": false, 00:08:07.493 "nvme_io": false, 00:08:07.493 "nvme_io_md": false, 00:08:07.493 "write_zeroes": true, 00:08:07.493 "zcopy": false, 00:08:07.493 "get_zone_info": false, 00:08:07.493 "zone_management": false, 00:08:07.493 "zone_append": false, 00:08:07.493 "compare": false, 00:08:07.493 "compare_and_write": false, 00:08:07.493 "abort": false, 00:08:07.493 "seek_hole": false, 00:08:07.493 "seek_data": false, 00:08:07.493 "copy": false, 00:08:07.493 "nvme_iov_md": false 00:08:07.493 }, 00:08:07.493 "memory_domains": [ 00:08:07.493 { 00:08:07.493 "dma_device_id": "system", 00:08:07.493 "dma_device_type": 1 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.493 "dma_device_type": 2 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "dma_device_id": "system", 00:08:07.493 "dma_device_type": 1 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.493 "dma_device_type": 2 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "dma_device_id": "system", 00:08:07.493 "dma_device_type": 1 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.493 "dma_device_type": 2 00:08:07.493 } 00:08:07.493 ], 00:08:07.493 "driver_specific": { 00:08:07.493 "raid": { 00:08:07.493 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:07.493 "strip_size_kb": 64, 00:08:07.493 "state": "online", 00:08:07.493 "raid_level": "concat", 00:08:07.493 "superblock": true, 00:08:07.493 "num_base_bdevs": 3, 00:08:07.493 "num_base_bdevs_discovered": 3, 00:08:07.493 "num_base_bdevs_operational": 3, 00:08:07.493 "base_bdevs_list": [ 00:08:07.493 { 00:08:07.493 
"name": "BaseBdev1", 00:08:07.493 "uuid": "a3e13687-47c5-44a1-952d-ec487c7d20a5", 00:08:07.493 "is_configured": true, 00:08:07.493 "data_offset": 2048, 00:08:07.493 "data_size": 63488 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "name": "BaseBdev2", 00:08:07.493 "uuid": "15f4f3a2-34b6-44f0-b2c5-c7726867b677", 00:08:07.493 "is_configured": true, 00:08:07.493 "data_offset": 2048, 00:08:07.493 "data_size": 63488 00:08:07.493 }, 00:08:07.493 { 00:08:07.493 "name": "BaseBdev3", 00:08:07.493 "uuid": "91a03200-c4cc-490d-a6a5-964fdd887628", 00:08:07.493 "is_configured": true, 00:08:07.493 "data_offset": 2048, 00:08:07.493 "data_size": 63488 00:08:07.493 } 00:08:07.493 ] 00:08:07.493 } 00:08:07.493 } 00:08:07.493 }' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.493 BaseBdev2 00:08:07.493 BaseBdev3' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.493 11:51:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.493 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 [2024-12-06 11:51:05.310754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.752 [2024-12-06 11:51:05.310779] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.752 [2024-12-06 11:51:05.310823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.752 "name": "Existed_Raid", 00:08:07.752 "uuid": "ed31bc73-0bc5-43bd-a49f-e160a7c93a02", 00:08:07.752 "strip_size_kb": 64, 00:08:07.752 "state": "offline", 00:08:07.752 "raid_level": "concat", 00:08:07.752 "superblock": true, 00:08:07.752 "num_base_bdevs": 3, 00:08:07.752 "num_base_bdevs_discovered": 2, 00:08:07.752 "num_base_bdevs_operational": 2, 00:08:07.752 "base_bdevs_list": [ 00:08:07.752 { 00:08:07.752 "name": null, 00:08:07.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:07.752 "is_configured": false, 00:08:07.752 "data_offset": 0, 00:08:07.752 "data_size": 63488 00:08:07.752 }, 00:08:07.752 { 00:08:07.752 "name": "BaseBdev2", 00:08:07.752 "uuid": "15f4f3a2-34b6-44f0-b2c5-c7726867b677", 00:08:07.752 "is_configured": true, 00:08:07.752 "data_offset": 2048, 00:08:07.752 "data_size": 63488 00:08:07.752 }, 00:08:07.752 { 00:08:07.752 "name": "BaseBdev3", 00:08:07.752 "uuid": "91a03200-c4cc-490d-a6a5-964fdd887628", 00:08:07.752 "is_configured": true, 00:08:07.752 "data_offset": 2048, 00:08:07.752 "data_size": 63488 00:08:07.752 } 00:08:07.752 ] 00:08:07.752 }' 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.752 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.012 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 [2024-12-06 11:51:05.809050] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 [2024-12-06 11:51:05.879929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.272 [2024-12-06 11:51:05.879974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 BaseBdev2 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 
11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 [ 00:08:08.272 { 00:08:08.272 "name": "BaseBdev2", 00:08:08.272 "aliases": [ 00:08:08.272 "aedd9445-4914-4623-a7b7-865511770be1" 00:08:08.272 ], 00:08:08.272 "product_name": "Malloc disk", 00:08:08.272 "block_size": 512, 00:08:08.272 "num_blocks": 65536, 00:08:08.272 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:08.272 "assigned_rate_limits": { 00:08:08.272 "rw_ios_per_sec": 0, 00:08:08.272 "rw_mbytes_per_sec": 0, 00:08:08.272 "r_mbytes_per_sec": 0, 00:08:08.272 "w_mbytes_per_sec": 0 
00:08:08.272 }, 00:08:08.272 "claimed": false, 00:08:08.272 "zoned": false, 00:08:08.272 "supported_io_types": { 00:08:08.272 "read": true, 00:08:08.272 "write": true, 00:08:08.272 "unmap": true, 00:08:08.272 "flush": true, 00:08:08.272 "reset": true, 00:08:08.272 "nvme_admin": false, 00:08:08.272 "nvme_io": false, 00:08:08.272 "nvme_io_md": false, 00:08:08.272 "write_zeroes": true, 00:08:08.272 "zcopy": true, 00:08:08.272 "get_zone_info": false, 00:08:08.272 "zone_management": false, 00:08:08.272 "zone_append": false, 00:08:08.272 "compare": false, 00:08:08.272 "compare_and_write": false, 00:08:08.272 "abort": true, 00:08:08.272 "seek_hole": false, 00:08:08.272 "seek_data": false, 00:08:08.272 "copy": true, 00:08:08.272 "nvme_iov_md": false 00:08:08.272 }, 00:08:08.272 "memory_domains": [ 00:08:08.272 { 00:08:08.272 "dma_device_id": "system", 00:08:08.272 "dma_device_type": 1 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.272 "dma_device_type": 2 00:08:08.272 } 00:08:08.272 ], 00:08:08.272 "driver_specific": {} 00:08:08.272 } 00:08:08.272 ] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 BaseBdev3 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.272 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.273 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 [ 00:08:08.533 { 00:08:08.533 "name": "BaseBdev3", 00:08:08.533 "aliases": [ 00:08:08.533 "2922687c-f301-45e4-8391-fa7d651cd899" 00:08:08.533 ], 00:08:08.533 "product_name": "Malloc disk", 00:08:08.533 "block_size": 512, 00:08:08.533 "num_blocks": 65536, 00:08:08.533 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:08.533 "assigned_rate_limits": { 00:08:08.533 "rw_ios_per_sec": 0, 00:08:08.533 "rw_mbytes_per_sec": 0, 
00:08:08.533 "r_mbytes_per_sec": 0, 00:08:08.533 "w_mbytes_per_sec": 0 00:08:08.533 }, 00:08:08.533 "claimed": false, 00:08:08.533 "zoned": false, 00:08:08.533 "supported_io_types": { 00:08:08.533 "read": true, 00:08:08.533 "write": true, 00:08:08.533 "unmap": true, 00:08:08.533 "flush": true, 00:08:08.533 "reset": true, 00:08:08.533 "nvme_admin": false, 00:08:08.533 "nvme_io": false, 00:08:08.533 "nvme_io_md": false, 00:08:08.533 "write_zeroes": true, 00:08:08.533 "zcopy": true, 00:08:08.533 "get_zone_info": false, 00:08:08.533 "zone_management": false, 00:08:08.533 "zone_append": false, 00:08:08.533 "compare": false, 00:08:08.533 "compare_and_write": false, 00:08:08.533 "abort": true, 00:08:08.533 "seek_hole": false, 00:08:08.533 "seek_data": false, 00:08:08.533 "copy": true, 00:08:08.533 "nvme_iov_md": false 00:08:08.533 }, 00:08:08.533 "memory_domains": [ 00:08:08.533 { 00:08:08.533 "dma_device_id": "system", 00:08:08.533 "dma_device_type": 1 00:08:08.533 }, 00:08:08.533 { 00:08:08.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.533 "dma_device_type": 2 00:08:08.533 } 00:08:08.533 ], 00:08:08.533 "driver_specific": {} 00:08:08.533 } 00:08:08.533 ] 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.533 [2024-12-06 11:51:06.054168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.533 [2024-12-06 11:51:06.054211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.533 [2024-12-06 11:51:06.054230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.533 [2024-12-06 11:51:06.055980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.533 11:51:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.533 "name": "Existed_Raid", 00:08:08.533 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:08.533 "strip_size_kb": 64, 00:08:08.533 "state": "configuring", 00:08:08.533 "raid_level": "concat", 00:08:08.533 "superblock": true, 00:08:08.533 "num_base_bdevs": 3, 00:08:08.533 "num_base_bdevs_discovered": 2, 00:08:08.533 "num_base_bdevs_operational": 3, 00:08:08.533 "base_bdevs_list": [ 00:08:08.533 { 00:08:08.533 "name": "BaseBdev1", 00:08:08.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.533 "is_configured": false, 00:08:08.533 "data_offset": 0, 00:08:08.533 "data_size": 0 00:08:08.533 }, 00:08:08.533 { 00:08:08.533 "name": "BaseBdev2", 00:08:08.533 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:08.533 "is_configured": true, 00:08:08.533 "data_offset": 2048, 00:08:08.533 "data_size": 63488 00:08:08.533 }, 00:08:08.533 { 00:08:08.533 "name": "BaseBdev3", 00:08:08.533 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:08.533 "is_configured": true, 00:08:08.533 "data_offset": 2048, 00:08:08.533 "data_size": 63488 00:08:08.533 } 00:08:08.533 ] 00:08:08.533 }' 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.533 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.795 [2024-12-06 11:51:06.445435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.795 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.796 "name": "Existed_Raid", 00:08:08.796 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:08.796 "strip_size_kb": 64, 00:08:08.796 "state": "configuring", 00:08:08.796 "raid_level": "concat", 00:08:08.796 "superblock": true, 00:08:08.796 "num_base_bdevs": 3, 00:08:08.796 "num_base_bdevs_discovered": 1, 00:08:08.796 "num_base_bdevs_operational": 3, 00:08:08.796 "base_bdevs_list": [ 00:08:08.796 { 00:08:08.796 "name": "BaseBdev1", 00:08:08.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.796 "is_configured": false, 00:08:08.796 "data_offset": 0, 00:08:08.796 "data_size": 0 00:08:08.796 }, 00:08:08.796 { 00:08:08.796 "name": null, 00:08:08.796 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:08.796 "is_configured": false, 00:08:08.796 "data_offset": 0, 00:08:08.796 "data_size": 63488 00:08:08.796 }, 00:08:08.796 { 00:08:08.796 "name": "BaseBdev3", 00:08:08.796 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:08.796 "is_configured": true, 00:08:08.796 "data_offset": 2048, 00:08:08.796 "data_size": 63488 00:08:08.796 } 00:08:08.796 ] 00:08:08.796 }' 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.796 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 [2024-12-06 11:51:06.919405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.379 BaseBdev1 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 11:51:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.379 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 [ 00:08:09.379 { 00:08:09.379 "name": "BaseBdev1", 00:08:09.379 "aliases": [ 00:08:09.379 "e72baa82-6d4a-4b73-90eb-dea31a394930" 00:08:09.379 ], 00:08:09.379 "product_name": "Malloc disk", 00:08:09.379 "block_size": 512, 00:08:09.379 "num_blocks": 65536, 00:08:09.379 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:09.379 "assigned_rate_limits": { 00:08:09.380 "rw_ios_per_sec": 0, 00:08:09.380 "rw_mbytes_per_sec": 0, 00:08:09.380 "r_mbytes_per_sec": 0, 00:08:09.380 "w_mbytes_per_sec": 0 00:08:09.380 }, 00:08:09.380 "claimed": true, 00:08:09.380 "claim_type": "exclusive_write", 00:08:09.380 "zoned": false, 00:08:09.380 "supported_io_types": { 00:08:09.380 "read": true, 00:08:09.380 "write": true, 00:08:09.380 "unmap": true, 00:08:09.380 "flush": true, 00:08:09.380 "reset": true, 00:08:09.380 "nvme_admin": false, 00:08:09.380 "nvme_io": false, 00:08:09.380 "nvme_io_md": false, 00:08:09.380 "write_zeroes": true, 00:08:09.380 "zcopy": true, 00:08:09.380 "get_zone_info": false, 00:08:09.380 "zone_management": false, 00:08:09.380 "zone_append": false, 00:08:09.380 "compare": false, 00:08:09.380 "compare_and_write": false, 00:08:09.380 "abort": true, 00:08:09.380 "seek_hole": false, 00:08:09.380 "seek_data": false, 00:08:09.380 "copy": true, 00:08:09.380 "nvme_iov_md": false 00:08:09.380 }, 00:08:09.380 "memory_domains": [ 00:08:09.380 { 00:08:09.380 "dma_device_id": "system", 00:08:09.380 "dma_device_type": 1 00:08:09.380 }, 00:08:09.380 { 00:08:09.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.380 
"dma_device_type": 2 00:08:09.380 } 00:08:09.380 ], 00:08:09.380 "driver_specific": {} 00:08:09.380 } 00:08:09.380 ] 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.380 "name": "Existed_Raid", 00:08:09.380 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:09.380 "strip_size_kb": 64, 00:08:09.380 "state": "configuring", 00:08:09.380 "raid_level": "concat", 00:08:09.380 "superblock": true, 00:08:09.380 "num_base_bdevs": 3, 00:08:09.380 "num_base_bdevs_discovered": 2, 00:08:09.380 "num_base_bdevs_operational": 3, 00:08:09.380 "base_bdevs_list": [ 00:08:09.380 { 00:08:09.380 "name": "BaseBdev1", 00:08:09.380 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:09.380 "is_configured": true, 00:08:09.380 "data_offset": 2048, 00:08:09.380 "data_size": 63488 00:08:09.380 }, 00:08:09.380 { 00:08:09.380 "name": null, 00:08:09.380 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:09.380 "is_configured": false, 00:08:09.380 "data_offset": 0, 00:08:09.380 "data_size": 63488 00:08:09.380 }, 00:08:09.380 { 00:08:09.380 "name": "BaseBdev3", 00:08:09.380 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:09.380 "is_configured": true, 00:08:09.380 "data_offset": 2048, 00:08:09.380 "data_size": 63488 00:08:09.380 } 00:08:09.380 ] 00:08:09.380 }' 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.380 11:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.641 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:09.641 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.641 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.641 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:09.901 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.902 [2024-12-06 11:51:07.410619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.902 
11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.902 "name": "Existed_Raid", 00:08:09.902 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:09.902 "strip_size_kb": 64, 00:08:09.902 "state": "configuring", 00:08:09.902 "raid_level": "concat", 00:08:09.902 "superblock": true, 00:08:09.902 "num_base_bdevs": 3, 00:08:09.902 "num_base_bdevs_discovered": 1, 00:08:09.902 "num_base_bdevs_operational": 3, 00:08:09.902 "base_bdevs_list": [ 00:08:09.902 { 00:08:09.902 "name": "BaseBdev1", 00:08:09.902 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:09.902 "is_configured": true, 00:08:09.902 "data_offset": 2048, 00:08:09.902 "data_size": 63488 00:08:09.902 }, 00:08:09.902 { 00:08:09.902 "name": null, 00:08:09.902 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:09.902 "is_configured": false, 00:08:09.902 "data_offset": 0, 00:08:09.902 "data_size": 63488 00:08:09.902 }, 00:08:09.902 { 00:08:09.902 "name": null, 00:08:09.902 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:09.902 "is_configured": false, 00:08:09.902 "data_offset": 0, 00:08:09.902 "data_size": 63488 00:08:09.902 } 00:08:09.902 ] 00:08:09.902 }' 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.902 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.162 
11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:10.162 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 [2024-12-06 11:51:07.873838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.163 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.424 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.424 "name": "Existed_Raid", 00:08:10.424 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:10.424 "strip_size_kb": 64, 00:08:10.424 "state": "configuring", 00:08:10.424 "raid_level": "concat", 00:08:10.424 "superblock": true, 00:08:10.424 "num_base_bdevs": 3, 00:08:10.424 "num_base_bdevs_discovered": 2, 00:08:10.424 "num_base_bdevs_operational": 3, 00:08:10.424 "base_bdevs_list": [ 00:08:10.424 { 00:08:10.424 "name": "BaseBdev1", 00:08:10.424 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:10.424 "is_configured": true, 00:08:10.424 "data_offset": 2048, 00:08:10.424 "data_size": 63488 00:08:10.424 }, 00:08:10.424 { 00:08:10.424 "name": null, 00:08:10.424 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:10.424 "is_configured": false, 00:08:10.424 "data_offset": 0, 00:08:10.424 "data_size": 
63488 00:08:10.424 }, 00:08:10.424 { 00:08:10.424 "name": "BaseBdev3", 00:08:10.424 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:10.424 "is_configured": true, 00:08:10.424 "data_offset": 2048, 00:08:10.424 "data_size": 63488 00:08:10.424 } 00:08:10.424 ] 00:08:10.424 }' 00:08:10.424 11:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.424 11:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.685 [2024-12-06 11:51:08.376999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.685 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.961 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.961 "name": "Existed_Raid", 00:08:10.961 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:10.961 "strip_size_kb": 64, 00:08:10.961 "state": "configuring", 00:08:10.961 "raid_level": "concat", 00:08:10.961 "superblock": true, 00:08:10.961 "num_base_bdevs": 3, 00:08:10.961 "num_base_bdevs_discovered": 1, 00:08:10.961 "num_base_bdevs_operational": 
3, 00:08:10.961 "base_bdevs_list": [ 00:08:10.961 { 00:08:10.961 "name": null, 00:08:10.961 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:10.961 "is_configured": false, 00:08:10.961 "data_offset": 0, 00:08:10.961 "data_size": 63488 00:08:10.961 }, 00:08:10.961 { 00:08:10.961 "name": null, 00:08:10.961 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:10.961 "is_configured": false, 00:08:10.961 "data_offset": 0, 00:08:10.961 "data_size": 63488 00:08:10.961 }, 00:08:10.961 { 00:08:10.961 "name": "BaseBdev3", 00:08:10.961 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:10.961 "is_configured": true, 00:08:10.961 "data_offset": 2048, 00:08:10.961 "data_size": 63488 00:08:10.961 } 00:08:10.961 ] 00:08:10.961 }' 00:08:10.961 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.961 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.231 [2024-12-06 11:51:08.858640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.231 "name": "Existed_Raid", 00:08:11.231 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:11.231 "strip_size_kb": 64, 00:08:11.231 "state": "configuring", 00:08:11.231 "raid_level": "concat", 00:08:11.231 "superblock": true, 00:08:11.231 "num_base_bdevs": 3, 00:08:11.231 "num_base_bdevs_discovered": 2, 00:08:11.231 "num_base_bdevs_operational": 3, 00:08:11.231 "base_bdevs_list": [ 00:08:11.231 { 00:08:11.231 "name": null, 00:08:11.231 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:11.231 "is_configured": false, 00:08:11.231 "data_offset": 0, 00:08:11.231 "data_size": 63488 00:08:11.231 }, 00:08:11.231 { 00:08:11.231 "name": "BaseBdev2", 00:08:11.231 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:11.231 "is_configured": true, 00:08:11.231 "data_offset": 2048, 00:08:11.231 "data_size": 63488 00:08:11.231 }, 00:08:11.231 { 00:08:11.231 "name": "BaseBdev3", 00:08:11.231 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:11.231 "is_configured": true, 00:08:11.231 "data_offset": 2048, 00:08:11.231 "data_size": 63488 00:08:11.231 } 00:08:11.231 ] 00:08:11.231 }' 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.231 11:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e72baa82-6d4a-4b73-90eb-dea31a394930 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 [2024-12-06 11:51:09.340461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:11.803 [2024-12-06 11:51:09.340703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:11.803 [2024-12-06 11:51:09.340752] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.803 NewBaseBdev 00:08:11.803 [2024-12-06 11:51:09.340997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.803 [2024-12-06 11:51:09.341146] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:11.803 [2024-12-06 11:51:09.341184] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000001c80 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:11.803 [2024-12-06 11:51:09.341337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.803 [ 00:08:11.803 { 00:08:11.803 "name": "NewBaseBdev", 00:08:11.803 "aliases": [ 00:08:11.803 "e72baa82-6d4a-4b73-90eb-dea31a394930" 00:08:11.803 ], 00:08:11.803 "product_name": "Malloc disk", 00:08:11.803 "block_size": 512, 00:08:11.803 "num_blocks": 65536, 00:08:11.803 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 
00:08:11.803 "assigned_rate_limits": { 00:08:11.803 "rw_ios_per_sec": 0, 00:08:11.803 "rw_mbytes_per_sec": 0, 00:08:11.803 "r_mbytes_per_sec": 0, 00:08:11.803 "w_mbytes_per_sec": 0 00:08:11.803 }, 00:08:11.803 "claimed": true, 00:08:11.803 "claim_type": "exclusive_write", 00:08:11.803 "zoned": false, 00:08:11.803 "supported_io_types": { 00:08:11.803 "read": true, 00:08:11.803 "write": true, 00:08:11.803 "unmap": true, 00:08:11.803 "flush": true, 00:08:11.803 "reset": true, 00:08:11.803 "nvme_admin": false, 00:08:11.803 "nvme_io": false, 00:08:11.803 "nvme_io_md": false, 00:08:11.803 "write_zeroes": true, 00:08:11.803 "zcopy": true, 00:08:11.803 "get_zone_info": false, 00:08:11.803 "zone_management": false, 00:08:11.803 "zone_append": false, 00:08:11.803 "compare": false, 00:08:11.803 "compare_and_write": false, 00:08:11.803 "abort": true, 00:08:11.803 "seek_hole": false, 00:08:11.803 "seek_data": false, 00:08:11.803 "copy": true, 00:08:11.803 "nvme_iov_md": false 00:08:11.803 }, 00:08:11.803 "memory_domains": [ 00:08:11.803 { 00:08:11.803 "dma_device_id": "system", 00:08:11.803 "dma_device_type": 1 00:08:11.803 }, 00:08:11.803 { 00:08:11.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.803 "dma_device_type": 2 00:08:11.803 } 00:08:11.803 ], 00:08:11.803 "driver_specific": {} 00:08:11.803 } 00:08:11.803 ] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.803 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.804 "name": "Existed_Raid", 00:08:11.804 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:11.804 "strip_size_kb": 64, 00:08:11.804 "state": "online", 00:08:11.804 "raid_level": "concat", 00:08:11.804 "superblock": true, 00:08:11.804 "num_base_bdevs": 3, 00:08:11.804 "num_base_bdevs_discovered": 3, 00:08:11.804 "num_base_bdevs_operational": 3, 00:08:11.804 "base_bdevs_list": [ 00:08:11.804 { 00:08:11.804 "name": "NewBaseBdev", 00:08:11.804 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:11.804 "is_configured": true, 00:08:11.804 "data_offset": 2048, 
00:08:11.804 "data_size": 63488 00:08:11.804 }, 00:08:11.804 { 00:08:11.804 "name": "BaseBdev2", 00:08:11.804 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:11.804 "is_configured": true, 00:08:11.804 "data_offset": 2048, 00:08:11.804 "data_size": 63488 00:08:11.804 }, 00:08:11.804 { 00:08:11.804 "name": "BaseBdev3", 00:08:11.804 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:11.804 "is_configured": true, 00:08:11.804 "data_offset": 2048, 00:08:11.804 "data_size": 63488 00:08:11.804 } 00:08:11.804 ] 00:08:11.804 }' 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.804 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.064 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.064 [2024-12-06 11:51:09.807991] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.324 "name": "Existed_Raid", 00:08:12.324 "aliases": [ 00:08:12.324 "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b" 00:08:12.324 ], 00:08:12.324 "product_name": "Raid Volume", 00:08:12.324 "block_size": 512, 00:08:12.324 "num_blocks": 190464, 00:08:12.324 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:12.324 "assigned_rate_limits": { 00:08:12.324 "rw_ios_per_sec": 0, 00:08:12.324 "rw_mbytes_per_sec": 0, 00:08:12.324 "r_mbytes_per_sec": 0, 00:08:12.324 "w_mbytes_per_sec": 0 00:08:12.324 }, 00:08:12.324 "claimed": false, 00:08:12.324 "zoned": false, 00:08:12.324 "supported_io_types": { 00:08:12.324 "read": true, 00:08:12.324 "write": true, 00:08:12.324 "unmap": true, 00:08:12.324 "flush": true, 00:08:12.324 "reset": true, 00:08:12.324 "nvme_admin": false, 00:08:12.324 "nvme_io": false, 00:08:12.324 "nvme_io_md": false, 00:08:12.324 "write_zeroes": true, 00:08:12.324 "zcopy": false, 00:08:12.324 "get_zone_info": false, 00:08:12.324 "zone_management": false, 00:08:12.324 "zone_append": false, 00:08:12.324 "compare": false, 00:08:12.324 "compare_and_write": false, 00:08:12.324 "abort": false, 00:08:12.324 "seek_hole": false, 00:08:12.324 "seek_data": false, 00:08:12.324 "copy": false, 00:08:12.324 "nvme_iov_md": false 00:08:12.324 }, 00:08:12.324 "memory_domains": [ 00:08:12.324 { 00:08:12.324 "dma_device_id": "system", 00:08:12.324 "dma_device_type": 1 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.324 "dma_device_type": 2 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "dma_device_id": "system", 00:08:12.324 "dma_device_type": 1 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.324 "dma_device_type": 2 00:08:12.324 }, 00:08:12.324 { 
00:08:12.324 "dma_device_id": "system", 00:08:12.324 "dma_device_type": 1 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.324 "dma_device_type": 2 00:08:12.324 } 00:08:12.324 ], 00:08:12.324 "driver_specific": { 00:08:12.324 "raid": { 00:08:12.324 "uuid": "3d63e4a0-29b5-4bc9-88f9-f516c2d92f7b", 00:08:12.324 "strip_size_kb": 64, 00:08:12.324 "state": "online", 00:08:12.324 "raid_level": "concat", 00:08:12.324 "superblock": true, 00:08:12.324 "num_base_bdevs": 3, 00:08:12.324 "num_base_bdevs_discovered": 3, 00:08:12.324 "num_base_bdevs_operational": 3, 00:08:12.324 "base_bdevs_list": [ 00:08:12.324 { 00:08:12.324 "name": "NewBaseBdev", 00:08:12.324 "uuid": "e72baa82-6d4a-4b73-90eb-dea31a394930", 00:08:12.324 "is_configured": true, 00:08:12.324 "data_offset": 2048, 00:08:12.324 "data_size": 63488 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "name": "BaseBdev2", 00:08:12.324 "uuid": "aedd9445-4914-4623-a7b7-865511770be1", 00:08:12.324 "is_configured": true, 00:08:12.324 "data_offset": 2048, 00:08:12.324 "data_size": 63488 00:08:12.324 }, 00:08:12.324 { 00:08:12.324 "name": "BaseBdev3", 00:08:12.324 "uuid": "2922687c-f301-45e4-8391-fa7d651cd899", 00:08:12.324 "is_configured": true, 00:08:12.324 "data_offset": 2048, 00:08:12.324 "data_size": 63488 00:08:12.324 } 00:08:12.324 ] 00:08:12.324 } 00:08:12.324 } 00:08:12.324 }' 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:12.324 BaseBdev2 00:08:12.324 BaseBdev3' 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
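The `bdev_raid.sh@188` step above pulls the configured base bdev names out of the raid bdev's JSON. A minimal standalone sketch of that same jq query, run against a trimmed, illustrative stand-in for the `rpc_cmd bdev_get_bdevs` output (the JSON below is not captured from this run; one bdev is deliberately left unconfigured to show the `select` in action):

```shell
# Hypothetical, trimmed stand-in for `rpc_cmd bdev_get_bdevs -b Existed_Raid` output.
raid_info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"NewBaseBdev","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":false}]}}}'

# Same filter as bdev_raid.sh@188: keep only configured base bdevs, print their names.
base_bdev_names=$(printf '%s' "$raid_info" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$base_bdev_names"
```

This prints the configured names one per line, which is exactly the shape the log's `base_bdev_names='NewBaseBdev BaseBdev2 BaseBdev3'` assignment has after word-splitting in the `for name in $base_bdev_names` loop that follows.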
00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.324 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.325 11:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.325 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.585 [2024-12-06 11:51:10.087292] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.585 [2024-12-06 11:51:10.087352] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.585 [2024-12-06 11:51:10.087429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.585 [2024-12-06 11:51:10.087491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.585 [2024-12-06 11:51:10.087537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77031 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77031 ']' 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77031 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77031 00:08:12.585 killing process with pid 77031 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77031' 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77031 00:08:12.585 [2024-12-06 11:51:10.127614] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.585 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77031 00:08:12.585 [2024-12-06 11:51:10.158493] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.845 11:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.845 00:08:12.845 real 0m8.604s 00:08:12.845 user 0m14.672s 00:08:12.845 sys 0m1.687s 00:08:12.845 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.845 ************************************ 00:08:12.845 END TEST raid_state_function_test_sb 
00:08:12.845 ************************************ 00:08:12.845 11:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.845 11:51:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:12.845 11:51:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:12.845 11:51:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.845 11:51:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.845 ************************************ 00:08:12.845 START TEST raid_superblock_test 00:08:12.845 ************************************ 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:12.845 11:51:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77629 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77629 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77629 ']' 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.845 11:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.845 [2024-12-06 11:51:10.557041] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:12.845 [2024-12-06 11:51:10.557153] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77629 ] 00:08:13.105 [2024-12-06 11:51:10.701392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.105 [2024-12-06 11:51:10.745177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.105 [2024-12-06 11:51:10.786482] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.105 [2024-12-06 11:51:10.786517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:13.676 
11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.676 malloc1 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.676 [2024-12-06 11:51:11.391771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.676 [2024-12-06 11:51:11.391886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.676 [2024-12-06 11:51:11.391928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:13.676 [2024-12-06 11:51:11.391965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.676 [2024-12-06 11:51:11.394034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.676 [2024-12-06 11:51:11.394119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.676 pt1 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.676 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.936 malloc2 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.936 [2024-12-06 11:51:11.440663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.936 [2024-12-06 11:51:11.440872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.936 [2024-12-06 11:51:11.440959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.936 [2024-12-06 11:51:11.441055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.936 [2024-12-06 11:51:11.445690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.936 [2024-12-06 11:51:11.445806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.936 
pt2 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:13.936 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.937 malloc3 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.937 [2024-12-06 11:51:11.474827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:13.937 [2024-12-06 11:51:11.474931] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.937 [2024-12-06 11:51:11.474964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:13.937 [2024-12-06 11:51:11.474992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.937 [2024-12-06 11:51:11.476987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.937 [2024-12-06 11:51:11.477073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:13.937 pt3 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.937 [2024-12-06 11:51:11.486881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.937 [2024-12-06 11:51:11.488716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.937 [2024-12-06 11:51:11.488820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:13.937 [2024-12-06 11:51:11.488993] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:13.937 [2024-12-06 11:51:11.489039] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:13.937 [2024-12-06 11:51:11.489311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:13.937 [2024-12-06 11:51:11.489479] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:13.937 [2024-12-06 11:51:11.489528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:13.937 [2024-12-06 11:51:11.489679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.937 "name": "raid_bdev1", 00:08:13.937 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:13.937 "strip_size_kb": 64, 00:08:13.937 "state": "online", 00:08:13.937 "raid_level": "concat", 00:08:13.937 "superblock": true, 00:08:13.937 "num_base_bdevs": 3, 00:08:13.937 "num_base_bdevs_discovered": 3, 00:08:13.937 "num_base_bdevs_operational": 3, 00:08:13.937 "base_bdevs_list": [ 00:08:13.937 { 00:08:13.937 "name": "pt1", 00:08:13.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.937 "is_configured": true, 00:08:13.937 "data_offset": 2048, 00:08:13.937 "data_size": 63488 00:08:13.937 }, 00:08:13.937 { 00:08:13.937 "name": "pt2", 00:08:13.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.937 "is_configured": true, 00:08:13.937 "data_offset": 2048, 00:08:13.937 "data_size": 63488 00:08:13.937 }, 00:08:13.937 { 00:08:13.937 "name": "pt3", 00:08:13.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:13.937 "is_configured": true, 00:08:13.937 "data_offset": 2048, 00:08:13.937 "data_size": 63488 00:08:13.937 } 00:08:13.937 ] 00:08:13.937 }' 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.937 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.197 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.197 [2024-12-06 11:51:11.946329] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.458 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.458 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.458 "name": "raid_bdev1", 00:08:14.458 "aliases": [ 00:08:14.458 "cd6a866e-ad85-406f-85bd-560f156699e5" 00:08:14.458 ], 00:08:14.458 "product_name": "Raid Volume", 00:08:14.458 "block_size": 512, 00:08:14.458 "num_blocks": 190464, 00:08:14.458 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:14.458 "assigned_rate_limits": { 00:08:14.458 "rw_ios_per_sec": 0, 00:08:14.458 "rw_mbytes_per_sec": 0, 00:08:14.458 "r_mbytes_per_sec": 0, 00:08:14.458 "w_mbytes_per_sec": 0 00:08:14.458 }, 00:08:14.458 "claimed": false, 00:08:14.458 "zoned": false, 00:08:14.458 "supported_io_types": { 00:08:14.458 "read": true, 00:08:14.458 "write": true, 00:08:14.458 "unmap": true, 00:08:14.458 "flush": true, 00:08:14.458 "reset": true, 00:08:14.458 "nvme_admin": false, 00:08:14.458 "nvme_io": false, 00:08:14.458 "nvme_io_md": false, 00:08:14.458 "write_zeroes": true, 00:08:14.458 "zcopy": false, 00:08:14.458 "get_zone_info": false, 00:08:14.458 "zone_management": false, 00:08:14.458 "zone_append": false, 00:08:14.458 "compare": 
false, 00:08:14.458 "compare_and_write": false, 00:08:14.458 "abort": false, 00:08:14.458 "seek_hole": false, 00:08:14.458 "seek_data": false, 00:08:14.458 "copy": false, 00:08:14.458 "nvme_iov_md": false 00:08:14.458 }, 00:08:14.458 "memory_domains": [ 00:08:14.458 { 00:08:14.458 "dma_device_id": "system", 00:08:14.458 "dma_device_type": 1 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.458 "dma_device_type": 2 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "dma_device_id": "system", 00:08:14.458 "dma_device_type": 1 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.458 "dma_device_type": 2 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "dma_device_id": "system", 00:08:14.458 "dma_device_type": 1 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.458 "dma_device_type": 2 00:08:14.458 } 00:08:14.458 ], 00:08:14.458 "driver_specific": { 00:08:14.458 "raid": { 00:08:14.458 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:14.458 "strip_size_kb": 64, 00:08:14.458 "state": "online", 00:08:14.458 "raid_level": "concat", 00:08:14.458 "superblock": true, 00:08:14.458 "num_base_bdevs": 3, 00:08:14.458 "num_base_bdevs_discovered": 3, 00:08:14.458 "num_base_bdevs_operational": 3, 00:08:14.458 "base_bdevs_list": [ 00:08:14.458 { 00:08:14.458 "name": "pt1", 00:08:14.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.458 "is_configured": true, 00:08:14.458 "data_offset": 2048, 00:08:14.458 "data_size": 63488 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "name": "pt2", 00:08:14.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.458 "is_configured": true, 00:08:14.458 "data_offset": 2048, 00:08:14.458 "data_size": 63488 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "name": "pt3", 00:08:14.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.458 "is_configured": true, 00:08:14.458 "data_offset": 2048, 00:08:14.458 
"data_size": 63488 00:08:14.458 } 00:08:14.458 ] 00:08:14.458 } 00:08:14.458 } 00:08:14.458 }' 00:08:14.458 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.458 pt2 00:08:14.458 pt3' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.458 11:51:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.458 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 [2024-12-06 11:51:12.233749] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.719 11:51:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd6a866e-ad85-406f-85bd-560f156699e5 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cd6a866e-ad85-406f-85bd-560f156699e5 ']' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 [2024-12-06 11:51:12.281435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.719 [2024-12-06 11:51:12.281492] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.719 [2024-12-06 11:51:12.281595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.719 [2024-12-06 11:51:12.281669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.719 [2024-12-06 11:51:12.281724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 [2024-12-06 11:51:12.433211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:14.719 [2024-12-06 11:51:12.435045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:14.720 [2024-12-06 11:51:12.435124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:14.720 [2024-12-06 11:51:12.435226] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:14.720 [2024-12-06 11:51:12.435317] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:14.720 [2024-12-06 11:51:12.435374] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:14.720 [2024-12-06 11:51:12.435417] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.720 [2024-12-06 11:51:12.435463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:14.720 request: 00:08:14.720 { 00:08:14.720 "name": "raid_bdev1", 00:08:14.720 "raid_level": "concat", 00:08:14.720 "base_bdevs": [ 00:08:14.720 "malloc1", 00:08:14.720 "malloc2", 00:08:14.720 "malloc3" 00:08:14.720 ], 00:08:14.720 "strip_size_kb": 64, 00:08:14.720 "superblock": false, 00:08:14.720 "method": "bdev_raid_create", 00:08:14.720 "req_id": 1 00:08:14.720 } 00:08:14.720 Got JSON-RPC error response 00:08:14.720 response: 00:08:14.720 { 00:08:14.720 "code": -17, 00:08:14.720 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:14.720 } 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:14.720 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.980 [2024-12-06 11:51:12.481094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.980 [2024-12-06 11:51:12.481189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.980 [2024-12-06 11:51:12.481218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.980 [2024-12-06 11:51:12.481255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.980 [2024-12-06 11:51:12.483293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.980 [2024-12-06 11:51:12.483361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.980 [2024-12-06 11:51:12.483456] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.980 [2024-12-06 11:51:12.483512] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.980 pt1 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.980 "name": "raid_bdev1", 
00:08:14.980 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:14.980 "strip_size_kb": 64, 00:08:14.980 "state": "configuring", 00:08:14.980 "raid_level": "concat", 00:08:14.980 "superblock": true, 00:08:14.980 "num_base_bdevs": 3, 00:08:14.980 "num_base_bdevs_discovered": 1, 00:08:14.980 "num_base_bdevs_operational": 3, 00:08:14.980 "base_bdevs_list": [ 00:08:14.980 { 00:08:14.980 "name": "pt1", 00:08:14.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.980 "is_configured": true, 00:08:14.980 "data_offset": 2048, 00:08:14.980 "data_size": 63488 00:08:14.980 }, 00:08:14.980 { 00:08:14.980 "name": null, 00:08:14.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.980 "is_configured": false, 00:08:14.980 "data_offset": 2048, 00:08:14.980 "data_size": 63488 00:08:14.980 }, 00:08:14.980 { 00:08:14.980 "name": null, 00:08:14.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.980 "is_configured": false, 00:08:14.980 "data_offset": 2048, 00:08:14.980 "data_size": 63488 00:08:14.980 } 00:08:14.980 ] 00:08:14.980 }' 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.980 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.240 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 [2024-12-06 11:51:12.936333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.241 [2024-12-06 11:51:12.936446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.241 [2024-12-06 11:51:12.936483] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:15.241 [2024-12-06 11:51:12.936514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.241 [2024-12-06 11:51:12.936872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.241 [2024-12-06 11:51:12.936926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:15.241 [2024-12-06 11:51:12.937010] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.241 [2024-12-06 11:51:12.937062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.241 pt2 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 [2024-12-06 11:51:12.948316] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 11:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.501 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.501 "name": "raid_bdev1", 00:08:15.501 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:15.501 "strip_size_kb": 64, 00:08:15.501 "state": "configuring", 00:08:15.501 "raid_level": "concat", 00:08:15.501 "superblock": true, 00:08:15.501 "num_base_bdevs": 3, 00:08:15.501 "num_base_bdevs_discovered": 1, 00:08:15.501 "num_base_bdevs_operational": 3, 00:08:15.501 "base_bdevs_list": [ 00:08:15.501 { 00:08:15.501 "name": "pt1", 00:08:15.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.501 "is_configured": true, 00:08:15.501 "data_offset": 2048, 00:08:15.501 "data_size": 63488 00:08:15.501 }, 00:08:15.501 { 00:08:15.501 "name": null, 00:08:15.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.501 "is_configured": false, 00:08:15.501 "data_offset": 0, 00:08:15.501 "data_size": 63488 00:08:15.501 }, 00:08:15.501 { 00:08:15.501 "name": null, 00:08:15.501 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.501 "is_configured": false, 00:08:15.501 "data_offset": 2048, 00:08:15.501 "data_size": 63488 00:08:15.501 } 00:08:15.501 ] 00:08:15.501 }' 00:08:15.501 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.501 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 [2024-12-06 11:51:13.355569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.762 [2024-12-06 11:51:13.355664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.762 [2024-12-06 11:51:13.355696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:15.762 [2024-12-06 11:51:13.355722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.762 [2024-12-06 11:51:13.356059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.762 [2024-12-06 11:51:13.356110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:15.762 [2024-12-06 11:51:13.356191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.762 [2024-12-06 11:51:13.356252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.762 pt2 00:08:15.762 11:51:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 [2024-12-06 11:51:13.367553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:15.762 [2024-12-06 11:51:13.367625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.762 [2024-12-06 11:51:13.367657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:15.762 [2024-12-06 11:51:13.367682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.762 [2024-12-06 11:51:13.368002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.762 [2024-12-06 11:51:13.368051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:15.762 [2024-12-06 11:51:13.368126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:15.762 [2024-12-06 11:51:13.368178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:15.762 [2024-12-06 11:51:13.368303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:15.762 [2024-12-06 11:51:13.368339] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.762 [2024-12-06 11:51:13.368567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:15.762 [2024-12-06 11:51:13.368696] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:15.762 [2024-12-06 11:51:13.368733] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:15.762 [2024-12-06 11:51:13.368854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.762 pt3 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.762 11:51:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.762 "name": "raid_bdev1", 00:08:15.762 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:15.762 "strip_size_kb": 64, 00:08:15.762 "state": "online", 00:08:15.762 "raid_level": "concat", 00:08:15.762 "superblock": true, 00:08:15.762 "num_base_bdevs": 3, 00:08:15.762 "num_base_bdevs_discovered": 3, 00:08:15.762 "num_base_bdevs_operational": 3, 00:08:15.762 "base_bdevs_list": [ 00:08:15.762 { 00:08:15.762 "name": "pt1", 00:08:15.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.762 "is_configured": true, 00:08:15.762 "data_offset": 2048, 00:08:15.762 "data_size": 63488 00:08:15.762 }, 00:08:15.762 { 00:08:15.762 "name": "pt2", 00:08:15.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.762 "is_configured": true, 00:08:15.762 "data_offset": 2048, 00:08:15.762 "data_size": 63488 00:08:15.762 }, 00:08:15.762 { 00:08:15.762 "name": "pt3", 00:08:15.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.762 "is_configured": true, 00:08:15.762 "data_offset": 2048, 00:08:15.762 "data_size": 63488 00:08:15.762 } 00:08:15.762 ] 00:08:15.762 }' 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.762 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.333 [2024-12-06 11:51:13.815057] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.333 "name": "raid_bdev1", 00:08:16.333 "aliases": [ 00:08:16.333 "cd6a866e-ad85-406f-85bd-560f156699e5" 00:08:16.333 ], 00:08:16.333 "product_name": "Raid Volume", 00:08:16.333 "block_size": 512, 00:08:16.333 "num_blocks": 190464, 00:08:16.333 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:16.333 "assigned_rate_limits": { 00:08:16.333 "rw_ios_per_sec": 0, 00:08:16.333 "rw_mbytes_per_sec": 0, 00:08:16.333 "r_mbytes_per_sec": 0, 00:08:16.333 "w_mbytes_per_sec": 0 00:08:16.333 }, 00:08:16.333 "claimed": false, 00:08:16.333 "zoned": false, 00:08:16.333 "supported_io_types": { 00:08:16.333 "read": true, 00:08:16.333 "write": true, 00:08:16.333 "unmap": true, 00:08:16.333 "flush": true, 00:08:16.333 "reset": true, 00:08:16.333 "nvme_admin": false, 00:08:16.333 "nvme_io": false, 
00:08:16.333 "nvme_io_md": false, 00:08:16.333 "write_zeroes": true, 00:08:16.333 "zcopy": false, 00:08:16.333 "get_zone_info": false, 00:08:16.333 "zone_management": false, 00:08:16.333 "zone_append": false, 00:08:16.333 "compare": false, 00:08:16.333 "compare_and_write": false, 00:08:16.333 "abort": false, 00:08:16.333 "seek_hole": false, 00:08:16.333 "seek_data": false, 00:08:16.333 "copy": false, 00:08:16.333 "nvme_iov_md": false 00:08:16.333 }, 00:08:16.333 "memory_domains": [ 00:08:16.333 { 00:08:16.333 "dma_device_id": "system", 00:08:16.333 "dma_device_type": 1 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.333 "dma_device_type": 2 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "dma_device_id": "system", 00:08:16.333 "dma_device_type": 1 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.333 "dma_device_type": 2 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "dma_device_id": "system", 00:08:16.333 "dma_device_type": 1 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.333 "dma_device_type": 2 00:08:16.333 } 00:08:16.333 ], 00:08:16.333 "driver_specific": { 00:08:16.333 "raid": { 00:08:16.333 "uuid": "cd6a866e-ad85-406f-85bd-560f156699e5", 00:08:16.333 "strip_size_kb": 64, 00:08:16.333 "state": "online", 00:08:16.333 "raid_level": "concat", 00:08:16.333 "superblock": true, 00:08:16.333 "num_base_bdevs": 3, 00:08:16.333 "num_base_bdevs_discovered": 3, 00:08:16.333 "num_base_bdevs_operational": 3, 00:08:16.333 "base_bdevs_list": [ 00:08:16.333 { 00:08:16.333 "name": "pt1", 00:08:16.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.333 "is_configured": true, 00:08:16.333 "data_offset": 2048, 00:08:16.333 "data_size": 63488 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "name": "pt2", 00:08:16.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.333 "is_configured": true, 00:08:16.333 "data_offset": 2048, 00:08:16.333 
"data_size": 63488 00:08:16.333 }, 00:08:16.333 { 00:08:16.333 "name": "pt3", 00:08:16.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.333 "is_configured": true, 00:08:16.333 "data_offset": 2048, 00:08:16.333 "data_size": 63488 00:08:16.333 } 00:08:16.333 ] 00:08:16.333 } 00:08:16.333 } 00:08:16.333 }' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:16.333 pt2 00:08:16.333 pt3' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.333 11:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.333 11:51:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.333 [2024-12-06 11:51:14.070588] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cd6a866e-ad85-406f-85bd-560f156699e5 '!=' cd6a866e-ad85-406f-85bd-560f156699e5 ']' 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77629 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77629 ']' 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77629 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77629 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77629' 00:08:16.593 killing process with pid 77629 00:08:16.593 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77629 00:08:16.593 [2024-12-06 11:51:14.152456] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:16.593 [2024-12-06 11:51:14.152529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.594 [2024-12-06 11:51:14.152597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.594 [2024-12-06 11:51:14.152607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:16.594 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77629 00:08:16.594 [2024-12-06 11:51:14.185784] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.854 11:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.854 00:08:16.854 real 0m3.955s 00:08:16.854 user 0m6.242s 00:08:16.854 sys 0m0.808s 00:08:16.854 ************************************ 00:08:16.854 END TEST raid_superblock_test 00:08:16.854 ************************************ 00:08:16.854 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.854 11:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.854 11:51:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:16.854 11:51:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.854 11:51:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.854 11:51:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.854 ************************************ 00:08:16.854 START TEST raid_read_error_test 00:08:16.854 ************************************ 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.854 11:51:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CoKr6AoHST 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77871 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77871 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 77871 ']' 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.854 11:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.854 [2024-12-06 11:51:14.601645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:16.854 [2024-12-06 11:51:14.601854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77871 ] 00:08:17.114 [2024-12-06 11:51:14.725668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.114 [2024-12-06 11:51:14.767765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.114 [2024-12-06 11:51:14.810015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.114 [2024-12-06 11:51:14.810120] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.683 BaseBdev1_malloc 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.683 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 true 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 [2024-12-06 11:51:15.447669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.942 [2024-12-06 11:51:15.447773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.942 [2024-12-06 11:51:15.447813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:17.942 [2024-12-06 11:51:15.447840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.942 [2024-12-06 11:51:15.449873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.942 [2024-12-06 11:51:15.449954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.942 BaseBdev1 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 BaseBdev2_malloc 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 true 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 [2024-12-06 11:51:15.505970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.942 [2024-12-06 11:51:15.506112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.942 [2024-12-06 11:51:15.506177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:17.942 [2024-12-06 11:51:15.506243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.942 [2024-12-06 11:51:15.509445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.942 [2024-12-06 11:51:15.509547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.942 BaseBdev2 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 BaseBdev3_malloc 00:08:17.942 11:51:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 true 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.942 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.942 [2024-12-06 11:51:15.546774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:17.942 [2024-12-06 11:51:15.546870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.942 [2024-12-06 11:51:15.546891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:17.942 [2024-12-06 11:51:15.546899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.942 [2024-12-06 11:51:15.548902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.943 [2024-12-06 11:51:15.548937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:17.943 BaseBdev3 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.943 [2024-12-06 11:51:15.558828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.943 [2024-12-06 11:51:15.560649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.943 [2024-12-06 11:51:15.560757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.943 [2024-12-06 11:51:15.560956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:17.943 [2024-12-06 11:51:15.561001] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.943 [2024-12-06 11:51:15.561261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:17.943 [2024-12-06 11:51:15.561428] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:17.943 [2024-12-06 11:51:15.561474] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:17.943 [2024-12-06 11:51:15.561626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.943 11:51:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.943 "name": "raid_bdev1", 00:08:17.943 "uuid": "991850c7-3343-400a-b371-e0f0165d54cb", 00:08:17.943 "strip_size_kb": 64, 00:08:17.943 "state": "online", 00:08:17.943 "raid_level": "concat", 00:08:17.943 "superblock": true, 00:08:17.943 "num_base_bdevs": 3, 00:08:17.943 "num_base_bdevs_discovered": 3, 00:08:17.943 "num_base_bdevs_operational": 3, 00:08:17.943 "base_bdevs_list": [ 00:08:17.943 { 00:08:17.943 "name": "BaseBdev1", 00:08:17.943 "uuid": "311ef46d-0551-58e7-b2da-f7d09ddf10ed", 00:08:17.943 "is_configured": true, 00:08:17.943 "data_offset": 2048, 00:08:17.943 "data_size": 63488 00:08:17.943 }, 00:08:17.943 { 00:08:17.943 "name": "BaseBdev2", 00:08:17.943 "uuid": "465d4af2-4919-5e23-b458-79427724f8e9", 00:08:17.943 "is_configured": true, 00:08:17.943 "data_offset": 2048, 00:08:17.943 "data_size": 63488 
00:08:17.943 }, 00:08:17.943 { 00:08:17.943 "name": "BaseBdev3", 00:08:17.943 "uuid": "312ede33-71c0-5563-bbb6-6cde54ee326c", 00:08:17.943 "is_configured": true, 00:08:17.943 "data_offset": 2048, 00:08:17.943 "data_size": 63488 00:08:17.943 } 00:08:17.943 ] 00:08:17.943 }' 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.943 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.512 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.512 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.512 [2024-12-06 11:51:16.066295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.451 "name": "raid_bdev1", 00:08:19.451 "uuid": "991850c7-3343-400a-b371-e0f0165d54cb", 00:08:19.451 "strip_size_kb": 64, 00:08:19.451 "state": "online", 00:08:19.451 "raid_level": "concat", 00:08:19.451 "superblock": true, 00:08:19.451 "num_base_bdevs": 3, 00:08:19.451 "num_base_bdevs_discovered": 3, 00:08:19.451 "num_base_bdevs_operational": 3, 00:08:19.451 "base_bdevs_list": [ 00:08:19.451 { 00:08:19.451 "name": "BaseBdev1", 00:08:19.451 "uuid": "311ef46d-0551-58e7-b2da-f7d09ddf10ed", 00:08:19.451 "is_configured": true, 00:08:19.451 "data_offset": 2048, 00:08:19.451 "data_size": 63488 
00:08:19.451 }, 00:08:19.451 { 00:08:19.451 "name": "BaseBdev2", 00:08:19.451 "uuid": "465d4af2-4919-5e23-b458-79427724f8e9", 00:08:19.451 "is_configured": true, 00:08:19.451 "data_offset": 2048, 00:08:19.451 "data_size": 63488 00:08:19.451 }, 00:08:19.451 { 00:08:19.451 "name": "BaseBdev3", 00:08:19.451 "uuid": "312ede33-71c0-5563-bbb6-6cde54ee326c", 00:08:19.451 "is_configured": true, 00:08:19.451 "data_offset": 2048, 00:08:19.451 "data_size": 63488 00:08:19.451 } 00:08:19.451 ] 00:08:19.451 }' 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.451 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.711 [2024-12-06 11:51:17.442195] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.711 [2024-12-06 11:51:17.442227] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.711 [2024-12-06 11:51:17.444654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.711 [2024-12-06 11:51:17.444709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.711 [2024-12-06 11:51:17.444742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.711 [2024-12-06 11:51:17.444758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:19.711 { 00:08:19.711 "results": [ 00:08:19.711 { 00:08:19.711 "job": "raid_bdev1", 00:08:19.711 "core_mask": "0x1", 00:08:19.711 "workload": "randrw", 00:08:19.711 "percentage": 50, 
00:08:19.711 "status": "finished", 00:08:19.711 "queue_depth": 1, 00:08:19.711 "io_size": 131072, 00:08:19.711 "runtime": 1.376688, 00:08:19.711 "iops": 17948.148019013748, 00:08:19.711 "mibps": 2243.5185023767185, 00:08:19.711 "io_failed": 1, 00:08:19.711 "io_timeout": 0, 00:08:19.711 "avg_latency_us": 77.15396068631938, 00:08:19.711 "min_latency_us": 24.370305676855896, 00:08:19.711 "max_latency_us": 1452.380786026201 00:08:19.711 } 00:08:19.711 ], 00:08:19.711 "core_count": 1 00:08:19.711 } 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77871 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 77871 ']' 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 77871 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.711 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77871 00:08:19.971 killing process with pid 77871 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77871' 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 77871 00:08:19.971 [2024-12-06 11:51:17.474774] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 77871 00:08:19.971 [2024-12-06 
11:51:17.500302] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CoKr6AoHST 00:08:19.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:20.232 ************************************ 00:08:20.232 END TEST raid_read_error_test 00:08:20.232 ************************************ 00:08:20.232 00:08:20.232 real 0m3.228s 00:08:20.232 user 0m4.064s 00:08:20.232 sys 0m0.463s 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.232 11:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.232 11:51:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:20.232 11:51:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.232 11:51:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.232 11:51:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.232 ************************************ 00:08:20.232 START TEST raid_write_error_test 00:08:20.232 ************************************ 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:20.232 11:51:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:20.232 11:51:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TEVuFQm18i 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78000 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78000 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78000 ']' 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.232 11:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.232 [2024-12-06 11:51:17.904723] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:20.232 [2024-12-06 11:51:17.904918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78000 ] 00:08:20.492 [2024-12-06 11:51:18.048482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.492 [2024-12-06 11:51:18.092503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.492 [2024-12-06 11:51:18.133850] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.492 [2024-12-06 11:51:18.133881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.062 BaseBdev1_malloc 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:21.062 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 true 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 [2024-12-06 11:51:18.746676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:21.063 [2024-12-06 11:51:18.746789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.063 [2024-12-06 11:51:18.746828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:21.063 [2024-12-06 11:51:18.746856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.063 [2024-12-06 11:51:18.748923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.063 [2024-12-06 11:51:18.749009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:21.063 BaseBdev1 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.063 BaseBdev2_malloc 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 true 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 [2024-12-06 11:51:18.802308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:21.063 [2024-12-06 11:51:18.802447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.063 [2024-12-06 11:51:18.802510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:21.063 [2024-12-06 11:51:18.802563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.063 [2024-12-06 11:51:18.805821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.063 [2024-12-06 11:51:18.805921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:21.063 BaseBdev2 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.063 11:51:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.063 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.323 BaseBdev3_malloc 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.323 true 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.323 [2024-12-06 11:51:18.842969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:21.323 [2024-12-06 11:51:18.843065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.323 [2024-12-06 11:51:18.843099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:21.323 [2024-12-06 11:51:18.843126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.323 [2024-12-06 11:51:18.845157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.323 [2024-12-06 11:51:18.845225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:21.323 BaseBdev3 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:21.323 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.324 [2024-12-06 11:51:18.855023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.324 [2024-12-06 11:51:18.856854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.324 [2024-12-06 11:51:18.856976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.324 [2024-12-06 11:51:18.857184] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:21.324 [2024-12-06 11:51:18.857230] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.324 [2024-12-06 11:51:18.857494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:21.324 [2024-12-06 11:51:18.857670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:21.324 [2024-12-06 11:51:18.857710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:21.324 [2024-12-06 11:51:18.857860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.324 "name": "raid_bdev1", 00:08:21.324 "uuid": "8ba56413-4a9e-421a-8c10-ee7d2cdf677a", 00:08:21.324 "strip_size_kb": 64, 00:08:21.324 "state": "online", 00:08:21.324 "raid_level": "concat", 00:08:21.324 "superblock": true, 00:08:21.324 "num_base_bdevs": 3, 00:08:21.324 "num_base_bdevs_discovered": 3, 00:08:21.324 "num_base_bdevs_operational": 3, 00:08:21.324 "base_bdevs_list": [ 00:08:21.324 { 00:08:21.324 
"name": "BaseBdev1", 00:08:21.324 "uuid": "874fe71d-9760-5b25-9abf-fb716df436e3", 00:08:21.324 "is_configured": true, 00:08:21.324 "data_offset": 2048, 00:08:21.324 "data_size": 63488 00:08:21.324 }, 00:08:21.324 { 00:08:21.324 "name": "BaseBdev2", 00:08:21.324 "uuid": "1665239d-dee5-52aa-991a-1ce220b2bf43", 00:08:21.324 "is_configured": true, 00:08:21.324 "data_offset": 2048, 00:08:21.324 "data_size": 63488 00:08:21.324 }, 00:08:21.324 { 00:08:21.324 "name": "BaseBdev3", 00:08:21.324 "uuid": "51211cb8-96ea-5631-8e35-503f02489959", 00:08:21.324 "is_configured": true, 00:08:21.324 "data_offset": 2048, 00:08:21.324 "data_size": 63488 00:08:21.324 } 00:08:21.324 ] 00:08:21.324 }' 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.324 11:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.584 11:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:21.584 11:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.844 [2024-12-06 11:51:19.378476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.783 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.783 "name": "raid_bdev1", 00:08:22.783 "uuid": "8ba56413-4a9e-421a-8c10-ee7d2cdf677a", 00:08:22.783 "strip_size_kb": 64, 00:08:22.783 "state": "online", 
00:08:22.783 "raid_level": "concat", 00:08:22.783 "superblock": true, 00:08:22.783 "num_base_bdevs": 3, 00:08:22.783 "num_base_bdevs_discovered": 3, 00:08:22.783 "num_base_bdevs_operational": 3, 00:08:22.783 "base_bdevs_list": [ 00:08:22.783 { 00:08:22.783 "name": "BaseBdev1", 00:08:22.783 "uuid": "874fe71d-9760-5b25-9abf-fb716df436e3", 00:08:22.783 "is_configured": true, 00:08:22.783 "data_offset": 2048, 00:08:22.783 "data_size": 63488 00:08:22.784 }, 00:08:22.784 { 00:08:22.784 "name": "BaseBdev2", 00:08:22.784 "uuid": "1665239d-dee5-52aa-991a-1ce220b2bf43", 00:08:22.784 "is_configured": true, 00:08:22.784 "data_offset": 2048, 00:08:22.784 "data_size": 63488 00:08:22.784 }, 00:08:22.784 { 00:08:22.784 "name": "BaseBdev3", 00:08:22.784 "uuid": "51211cb8-96ea-5631-8e35-503f02489959", 00:08:22.784 "is_configured": true, 00:08:22.784 "data_offset": 2048, 00:08:22.784 "data_size": 63488 00:08:22.784 } 00:08:22.784 ] 00:08:22.784 }' 00:08:22.784 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.784 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.044 [2024-12-06 11:51:20.758290] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.044 [2024-12-06 11:51:20.758363] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.044 [2024-12-06 11:51:20.760839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.044 { 00:08:23.044 "results": [ 00:08:23.044 { 00:08:23.044 "job": "raid_bdev1", 00:08:23.044 "core_mask": "0x1", 00:08:23.044 "workload": "randrw", 00:08:23.044 
"percentage": 50, 00:08:23.044 "status": "finished", 00:08:23.044 "queue_depth": 1, 00:08:23.044 "io_size": 131072, 00:08:23.044 "runtime": 1.380742, 00:08:23.044 "iops": 17799.849646059873, 00:08:23.044 "mibps": 2224.981205757484, 00:08:23.044 "io_failed": 1, 00:08:23.044 "io_timeout": 0, 00:08:23.044 "avg_latency_us": 77.82098905507499, 00:08:23.044 "min_latency_us": 24.370305676855896, 00:08:23.044 "max_latency_us": 1359.3711790393013 00:08:23.044 } 00:08:23.044 ], 00:08:23.044 "core_count": 1 00:08:23.044 } 00:08:23.044 [2024-12-06 11:51:20.760950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.044 [2024-12-06 11:51:20.760990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.044 [2024-12-06 11:51:20.761001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78000 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78000 ']' 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78000 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.044 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78000 00:08:23.304 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.304 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.304 11:51:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78000' 00:08:23.304 killing process with pid 78000 00:08:23.304 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78000 00:08:23.304 [2024-12-06 11:51:20.813507] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.304 11:51:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78000 00:08:23.304 [2024-12-06 11:51:20.839160] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TEVuFQm18i 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:23.564 00:08:23.564 real 0m3.270s 00:08:23.564 user 0m4.134s 00:08:23.564 sys 0m0.500s 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.564 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.564 ************************************ 00:08:23.564 END TEST raid_write_error_test 00:08:23.564 ************************************ 00:08:23.564 11:51:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:23.564 11:51:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:23.564 11:51:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.564 11:51:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.564 11:51:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.564 ************************************ 00:08:23.564 START TEST raid_state_function_test 00:08:23.564 ************************************ 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78133 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78133' 00:08:23.564 Process raid pid: 78133 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78133 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78133 ']' 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.564 11:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.564 [2024-12-06 11:51:21.237710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:23.564 [2024-12-06 11:51:21.237873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.823 [2024-12-06 11:51:21.367300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.823 [2024-12-06 11:51:21.409675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.823 [2024-12-06 11:51:21.450963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.823 [2024-12-06 11:51:21.451075] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.404 [2024-12-06 11:51:22.063482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.404 [2024-12-06 11:51:22.063588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.404 [2024-12-06 11:51:22.063619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.404 [2024-12-06 11:51:22.063642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.404 [2024-12-06 11:51:22.063659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.404 [2024-12-06 11:51:22.063683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.404 
11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.404 "name": "Existed_Raid", 00:08:24.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.404 "strip_size_kb": 0, 00:08:24.404 "state": "configuring", 00:08:24.404 "raid_level": "raid1", 00:08:24.404 "superblock": false, 00:08:24.404 "num_base_bdevs": 3, 00:08:24.404 "num_base_bdevs_discovered": 0, 00:08:24.404 "num_base_bdevs_operational": 3, 00:08:24.404 "base_bdevs_list": [ 00:08:24.404 { 00:08:24.404 "name": "BaseBdev1", 00:08:24.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.404 "is_configured": false, 00:08:24.404 "data_offset": 0, 00:08:24.404 "data_size": 0 00:08:24.404 }, 00:08:24.404 { 00:08:24.404 "name": "BaseBdev2", 00:08:24.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.404 "is_configured": false, 00:08:24.404 "data_offset": 0, 00:08:24.404 "data_size": 0 00:08:24.404 }, 00:08:24.404 { 00:08:24.404 "name": "BaseBdev3", 00:08:24.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.404 "is_configured": false, 00:08:24.404 "data_offset": 0, 00:08:24.404 "data_size": 0 00:08:24.404 } 00:08:24.404 ] 00:08:24.404 }' 00:08:24.404 11:51:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.404 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 [2024-12-06 11:51:22.466701] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.993 [2024-12-06 11:51:22.466776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 [2024-12-06 11:51:22.478699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.993 [2024-12-06 11:51:22.478740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.993 [2024-12-06 11:51:22.478748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.993 [2024-12-06 11:51:22.478756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.993 [2024-12-06 11:51:22.478762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.993 [2024-12-06 11:51:22.478770] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 [2024-12-06 11:51:22.499307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.993 BaseBdev1 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 [ 00:08:24.993 { 00:08:24.993 "name": "BaseBdev1", 00:08:24.993 "aliases": [ 00:08:24.993 "c762fd5f-95b1-4745-919e-1b811779f26d" 00:08:24.993 ], 00:08:24.993 "product_name": "Malloc disk", 00:08:24.993 "block_size": 512, 00:08:24.993 "num_blocks": 65536, 00:08:24.993 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:24.993 "assigned_rate_limits": { 00:08:24.993 "rw_ios_per_sec": 0, 00:08:24.993 "rw_mbytes_per_sec": 0, 00:08:24.993 "r_mbytes_per_sec": 0, 00:08:24.993 "w_mbytes_per_sec": 0 00:08:24.993 }, 00:08:24.993 "claimed": true, 00:08:24.993 "claim_type": "exclusive_write", 00:08:24.993 "zoned": false, 00:08:24.993 "supported_io_types": { 00:08:24.993 "read": true, 00:08:24.993 "write": true, 00:08:24.993 "unmap": true, 00:08:24.993 "flush": true, 00:08:24.993 "reset": true, 00:08:24.993 "nvme_admin": false, 00:08:24.993 "nvme_io": false, 00:08:24.993 "nvme_io_md": false, 00:08:24.993 "write_zeroes": true, 00:08:24.993 "zcopy": true, 00:08:24.993 "get_zone_info": false, 00:08:24.993 "zone_management": false, 00:08:24.993 "zone_append": false, 00:08:24.993 "compare": false, 00:08:24.993 "compare_and_write": false, 00:08:24.993 "abort": true, 00:08:24.993 "seek_hole": false, 00:08:24.993 "seek_data": false, 00:08:24.993 "copy": true, 00:08:24.993 "nvme_iov_md": false 00:08:24.993 }, 00:08:24.993 "memory_domains": [ 00:08:24.993 { 00:08:24.993 "dma_device_id": "system", 00:08:24.993 "dma_device_type": 1 00:08:24.993 }, 00:08:24.993 { 00:08:24.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.993 "dma_device_type": 2 00:08:24.993 } 00:08:24.993 ], 00:08:24.993 "driver_specific": {} 00:08:24.993 } 00:08:24.993 ] 00:08:24.993 11:51:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.993 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.994 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.994 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.994 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:24.994 "name": "Existed_Raid", 00:08:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.994 "strip_size_kb": 0, 00:08:24.994 "state": "configuring", 00:08:24.994 "raid_level": "raid1", 00:08:24.994 "superblock": false, 00:08:24.994 "num_base_bdevs": 3, 00:08:24.994 "num_base_bdevs_discovered": 1, 00:08:24.994 "num_base_bdevs_operational": 3, 00:08:24.994 "base_bdevs_list": [ 00:08:24.994 { 00:08:24.994 "name": "BaseBdev1", 00:08:24.994 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:24.994 "is_configured": true, 00:08:24.994 "data_offset": 0, 00:08:24.994 "data_size": 65536 00:08:24.994 }, 00:08:24.994 { 00:08:24.994 "name": "BaseBdev2", 00:08:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.994 "is_configured": false, 00:08:24.994 "data_offset": 0, 00:08:24.994 "data_size": 0 00:08:24.994 }, 00:08:24.994 { 00:08:24.994 "name": "BaseBdev3", 00:08:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.994 "is_configured": false, 00:08:24.994 "data_offset": 0, 00:08:24.994 "data_size": 0 00:08:24.994 } 00:08:24.994 ] 00:08:24.994 }' 00:08:24.994 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.994 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.254 11:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.254 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.254 11:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.254 [2024-12-06 11:51:23.002428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.254 [2024-12-06 11:51:23.002519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:25.254 11:51:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.254 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.254 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.515 [2024-12-06 11:51:23.014460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.515 [2024-12-06 11:51:23.016270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.515 [2024-12-06 11:51:23.016314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.515 [2024-12-06 11:51:23.016324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.515 [2024-12-06 11:51:23.016334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.515 "name": "Existed_Raid", 00:08:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.515 "strip_size_kb": 0, 00:08:25.515 "state": "configuring", 00:08:25.515 "raid_level": "raid1", 00:08:25.515 "superblock": false, 00:08:25.515 "num_base_bdevs": 3, 00:08:25.515 "num_base_bdevs_discovered": 1, 00:08:25.515 "num_base_bdevs_operational": 3, 00:08:25.515 "base_bdevs_list": [ 00:08:25.515 { 00:08:25.515 "name": "BaseBdev1", 00:08:25.515 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:25.515 "is_configured": true, 00:08:25.515 "data_offset": 0, 00:08:25.515 "data_size": 65536 00:08:25.515 }, 00:08:25.515 { 00:08:25.515 "name": "BaseBdev2", 00:08:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.515 
"is_configured": false, 00:08:25.515 "data_offset": 0, 00:08:25.515 "data_size": 0 00:08:25.515 }, 00:08:25.515 { 00:08:25.515 "name": "BaseBdev3", 00:08:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.515 "is_configured": false, 00:08:25.515 "data_offset": 0, 00:08:25.515 "data_size": 0 00:08:25.515 } 00:08:25.515 ] 00:08:25.515 }' 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.515 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 [2024-12-06 11:51:23.445963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.776 BaseBdev2 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.776 11:51:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 [ 00:08:25.776 { 00:08:25.776 "name": "BaseBdev2", 00:08:25.776 "aliases": [ 00:08:25.776 "b552538a-14cc-4ae7-8cd7-fc9924fea77a" 00:08:25.776 ], 00:08:25.776 "product_name": "Malloc disk", 00:08:25.776 "block_size": 512, 00:08:25.776 "num_blocks": 65536, 00:08:25.776 "uuid": "b552538a-14cc-4ae7-8cd7-fc9924fea77a", 00:08:25.776 "assigned_rate_limits": { 00:08:25.776 "rw_ios_per_sec": 0, 00:08:25.776 "rw_mbytes_per_sec": 0, 00:08:25.776 "r_mbytes_per_sec": 0, 00:08:25.776 "w_mbytes_per_sec": 0 00:08:25.776 }, 00:08:25.776 "claimed": true, 00:08:25.776 "claim_type": "exclusive_write", 00:08:25.776 "zoned": false, 00:08:25.776 "supported_io_types": { 00:08:25.776 "read": true, 00:08:25.776 "write": true, 00:08:25.776 "unmap": true, 00:08:25.776 "flush": true, 00:08:25.776 "reset": true, 00:08:25.776 "nvme_admin": false, 00:08:25.776 "nvme_io": false, 00:08:25.776 "nvme_io_md": false, 00:08:25.776 "write_zeroes": true, 00:08:25.776 "zcopy": true, 00:08:25.776 "get_zone_info": false, 00:08:25.776 "zone_management": false, 00:08:25.776 "zone_append": false, 00:08:25.776 "compare": false, 00:08:25.776 "compare_and_write": false, 00:08:25.776 "abort": true, 00:08:25.776 "seek_hole": false, 00:08:25.776 "seek_data": false, 00:08:25.776 "copy": true, 00:08:25.776 "nvme_iov_md": false 00:08:25.776 }, 00:08:25.776 
"memory_domains": [ 00:08:25.776 { 00:08:25.776 "dma_device_id": "system", 00:08:25.776 "dma_device_type": 1 00:08:25.776 }, 00:08:25.776 { 00:08:25.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.776 "dma_device_type": 2 00:08:25.776 } 00:08:25.776 ], 00:08:25.776 "driver_specific": {} 00:08:25.776 } 00:08:25.776 ] 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.776 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.036 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.036 "name": "Existed_Raid", 00:08:26.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.036 "strip_size_kb": 0, 00:08:26.036 "state": "configuring", 00:08:26.036 "raid_level": "raid1", 00:08:26.036 "superblock": false, 00:08:26.036 "num_base_bdevs": 3, 00:08:26.036 "num_base_bdevs_discovered": 2, 00:08:26.036 "num_base_bdevs_operational": 3, 00:08:26.036 "base_bdevs_list": [ 00:08:26.036 { 00:08:26.036 "name": "BaseBdev1", 00:08:26.036 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:26.036 "is_configured": true, 00:08:26.036 "data_offset": 0, 00:08:26.036 "data_size": 65536 00:08:26.036 }, 00:08:26.036 { 00:08:26.036 "name": "BaseBdev2", 00:08:26.036 "uuid": "b552538a-14cc-4ae7-8cd7-fc9924fea77a", 00:08:26.036 "is_configured": true, 00:08:26.036 "data_offset": 0, 00:08:26.036 "data_size": 65536 00:08:26.036 }, 00:08:26.036 { 00:08:26.036 "name": "BaseBdev3", 00:08:26.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.036 "is_configured": false, 00:08:26.036 "data_offset": 0, 00:08:26.036 "data_size": 0 00:08:26.036 } 00:08:26.036 ] 00:08:26.036 }' 00:08:26.036 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.036 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.297 [2024-12-06 11:51:23.939935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.297 [2024-12-06 11:51:23.939980] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:26.297 [2024-12-06 11:51:23.939990] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:26.297 [2024-12-06 11:51:23.940251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:26.297 [2024-12-06 11:51:23.940390] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:26.297 [2024-12-06 11:51:23.940398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:26.297 [2024-12-06 11:51:23.940589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.297 BaseBdev3 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.297 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.297 [ 00:08:26.297 { 00:08:26.297 "name": "BaseBdev3", 00:08:26.297 "aliases": [ 00:08:26.297 "17207513-cd83-4d91-a758-30382cc5acd9" 00:08:26.298 ], 00:08:26.298 "product_name": "Malloc disk", 00:08:26.298 "block_size": 512, 00:08:26.298 "num_blocks": 65536, 00:08:26.298 "uuid": "17207513-cd83-4d91-a758-30382cc5acd9", 00:08:26.298 "assigned_rate_limits": { 00:08:26.298 "rw_ios_per_sec": 0, 00:08:26.298 "rw_mbytes_per_sec": 0, 00:08:26.298 "r_mbytes_per_sec": 0, 00:08:26.298 "w_mbytes_per_sec": 0 00:08:26.298 }, 00:08:26.298 "claimed": true, 00:08:26.298 "claim_type": "exclusive_write", 00:08:26.298 "zoned": false, 00:08:26.298 "supported_io_types": { 00:08:26.298 "read": true, 00:08:26.298 "write": true, 00:08:26.298 "unmap": true, 00:08:26.298 "flush": true, 00:08:26.298 "reset": true, 00:08:26.298 "nvme_admin": false, 00:08:26.298 "nvme_io": false, 00:08:26.298 "nvme_io_md": false, 00:08:26.298 "write_zeroes": true, 00:08:26.298 "zcopy": true, 00:08:26.298 "get_zone_info": false, 00:08:26.298 "zone_management": false, 00:08:26.298 "zone_append": false, 00:08:26.298 "compare": false, 00:08:26.298 "compare_and_write": false, 00:08:26.298 "abort": true, 00:08:26.298 "seek_hole": false, 00:08:26.298 "seek_data": false, 00:08:26.298 
"copy": true, 00:08:26.298 "nvme_iov_md": false 00:08:26.298 }, 00:08:26.298 "memory_domains": [ 00:08:26.298 { 00:08:26.298 "dma_device_id": "system", 00:08:26.298 "dma_device_type": 1 00:08:26.298 }, 00:08:26.298 { 00:08:26.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.298 "dma_device_type": 2 00:08:26.298 } 00:08:26.298 ], 00:08:26.298 "driver_specific": {} 00:08:26.298 } 00:08:26.298 ] 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.298 11:51:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.298 11:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.298 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.298 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.298 "name": "Existed_Raid", 00:08:26.298 "uuid": "573d85aa-8f22-4740-be08-416963281e5c", 00:08:26.298 "strip_size_kb": 0, 00:08:26.298 "state": "online", 00:08:26.298 "raid_level": "raid1", 00:08:26.298 "superblock": false, 00:08:26.298 "num_base_bdevs": 3, 00:08:26.298 "num_base_bdevs_discovered": 3, 00:08:26.298 "num_base_bdevs_operational": 3, 00:08:26.298 "base_bdevs_list": [ 00:08:26.298 { 00:08:26.298 "name": "BaseBdev1", 00:08:26.298 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:26.298 "is_configured": true, 00:08:26.298 "data_offset": 0, 00:08:26.298 "data_size": 65536 00:08:26.298 }, 00:08:26.298 { 00:08:26.298 "name": "BaseBdev2", 00:08:26.298 "uuid": "b552538a-14cc-4ae7-8cd7-fc9924fea77a", 00:08:26.298 "is_configured": true, 00:08:26.298 "data_offset": 0, 00:08:26.298 "data_size": 65536 00:08:26.298 }, 00:08:26.298 { 00:08:26.298 "name": "BaseBdev3", 00:08:26.298 "uuid": "17207513-cd83-4d91-a758-30382cc5acd9", 00:08:26.298 "is_configured": true, 00:08:26.298 "data_offset": 0, 00:08:26.298 "data_size": 65536 00:08:26.298 } 00:08:26.298 ] 00:08:26.298 }' 00:08:26.298 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.298 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.868 11:51:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.868 [2024-12-06 11:51:24.427461] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.868 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.868 "name": "Existed_Raid", 00:08:26.868 "aliases": [ 00:08:26.868 "573d85aa-8f22-4740-be08-416963281e5c" 00:08:26.868 ], 00:08:26.868 "product_name": "Raid Volume", 00:08:26.868 "block_size": 512, 00:08:26.868 "num_blocks": 65536, 00:08:26.868 "uuid": "573d85aa-8f22-4740-be08-416963281e5c", 00:08:26.868 "assigned_rate_limits": { 00:08:26.868 "rw_ios_per_sec": 0, 00:08:26.868 "rw_mbytes_per_sec": 0, 00:08:26.868 "r_mbytes_per_sec": 0, 00:08:26.868 "w_mbytes_per_sec": 0 00:08:26.868 }, 00:08:26.868 "claimed": false, 00:08:26.868 "zoned": false, 
00:08:26.868 "supported_io_types": { 00:08:26.868 "read": true, 00:08:26.868 "write": true, 00:08:26.868 "unmap": false, 00:08:26.868 "flush": false, 00:08:26.868 "reset": true, 00:08:26.868 "nvme_admin": false, 00:08:26.868 "nvme_io": false, 00:08:26.868 "nvme_io_md": false, 00:08:26.868 "write_zeroes": true, 00:08:26.868 "zcopy": false, 00:08:26.868 "get_zone_info": false, 00:08:26.868 "zone_management": false, 00:08:26.868 "zone_append": false, 00:08:26.868 "compare": false, 00:08:26.868 "compare_and_write": false, 00:08:26.868 "abort": false, 00:08:26.868 "seek_hole": false, 00:08:26.868 "seek_data": false, 00:08:26.868 "copy": false, 00:08:26.868 "nvme_iov_md": false 00:08:26.868 }, 00:08:26.868 "memory_domains": [ 00:08:26.868 { 00:08:26.868 "dma_device_id": "system", 00:08:26.868 "dma_device_type": 1 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.868 "dma_device_type": 2 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "dma_device_id": "system", 00:08:26.868 "dma_device_type": 1 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.868 "dma_device_type": 2 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "dma_device_id": "system", 00:08:26.869 "dma_device_type": 1 00:08:26.869 }, 00:08:26.869 { 00:08:26.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.869 "dma_device_type": 2 00:08:26.869 } 00:08:26.869 ], 00:08:26.869 "driver_specific": { 00:08:26.869 "raid": { 00:08:26.869 "uuid": "573d85aa-8f22-4740-be08-416963281e5c", 00:08:26.869 "strip_size_kb": 0, 00:08:26.869 "state": "online", 00:08:26.869 "raid_level": "raid1", 00:08:26.869 "superblock": false, 00:08:26.869 "num_base_bdevs": 3, 00:08:26.869 "num_base_bdevs_discovered": 3, 00:08:26.869 "num_base_bdevs_operational": 3, 00:08:26.869 "base_bdevs_list": [ 00:08:26.869 { 00:08:26.869 "name": "BaseBdev1", 00:08:26.869 "uuid": "c762fd5f-95b1-4745-919e-1b811779f26d", 00:08:26.869 "is_configured": true, 00:08:26.869 
"data_offset": 0, 00:08:26.869 "data_size": 65536 00:08:26.869 }, 00:08:26.869 { 00:08:26.869 "name": "BaseBdev2", 00:08:26.869 "uuid": "b552538a-14cc-4ae7-8cd7-fc9924fea77a", 00:08:26.869 "is_configured": true, 00:08:26.869 "data_offset": 0, 00:08:26.869 "data_size": 65536 00:08:26.869 }, 00:08:26.869 { 00:08:26.869 "name": "BaseBdev3", 00:08:26.869 "uuid": "17207513-cd83-4d91-a758-30382cc5acd9", 00:08:26.869 "is_configured": true, 00:08:26.869 "data_offset": 0, 00:08:26.869 "data_size": 65536 00:08:26.869 } 00:08:26.869 ] 00:08:26.869 } 00:08:26.869 } 00:08:26.869 }' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.869 BaseBdev2 00:08:26.869 BaseBdev3' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.869 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.129 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.130 [2024-12-06 11:51:24.654827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.130 "name": "Existed_Raid", 00:08:27.130 "uuid": "573d85aa-8f22-4740-be08-416963281e5c", 00:08:27.130 "strip_size_kb": 0, 00:08:27.130 "state": "online", 00:08:27.130 "raid_level": "raid1", 00:08:27.130 "superblock": false, 00:08:27.130 "num_base_bdevs": 3, 00:08:27.130 "num_base_bdevs_discovered": 2, 00:08:27.130 "num_base_bdevs_operational": 2, 00:08:27.130 "base_bdevs_list": [ 00:08:27.130 { 00:08:27.130 "name": null, 00:08:27.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.130 "is_configured": false, 00:08:27.130 "data_offset": 0, 00:08:27.130 "data_size": 65536 00:08:27.130 }, 00:08:27.130 { 00:08:27.130 "name": "BaseBdev2", 00:08:27.130 "uuid": "b552538a-14cc-4ae7-8cd7-fc9924fea77a", 00:08:27.130 "is_configured": true, 00:08:27.130 "data_offset": 0, 00:08:27.130 "data_size": 65536 00:08:27.130 }, 00:08:27.130 { 00:08:27.130 "name": "BaseBdev3", 00:08:27.130 "uuid": "17207513-cd83-4d91-a758-30382cc5acd9", 00:08:27.130 "is_configured": true, 00:08:27.130 "data_offset": 0, 00:08:27.130 "data_size": 65536 00:08:27.130 } 00:08:27.130 ] 
00:08:27.130 }' 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.130 11:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.390 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:27.390 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.390 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.390 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.391 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.391 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 [2024-12-06 11:51:25.173114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.651 11:51:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 [2024-12-06 11:51:25.220129] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:27.651 [2024-12-06 11:51:25.220213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.651 [2024-12-06 11:51:25.231530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.651 [2024-12-06 11:51:25.231572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.651 [2024-12-06 11:51:25.231586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.651 11:51:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.651 
11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 [ 00:08:27.652 { 00:08:27.652 "name": "BaseBdev2", 00:08:27.652 "aliases": [ 00:08:27.652 "8d5bc828-daab-4008-8c3b-7f99fb700f38" 00:08:27.652 ], 00:08:27.652 "product_name": "Malloc disk", 00:08:27.652 "block_size": 512, 00:08:27.652 "num_blocks": 65536, 00:08:27.652 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:27.652 "assigned_rate_limits": { 00:08:27.652 "rw_ios_per_sec": 0, 00:08:27.652 "rw_mbytes_per_sec": 0, 00:08:27.652 "r_mbytes_per_sec": 0, 00:08:27.652 "w_mbytes_per_sec": 0 00:08:27.652 }, 00:08:27.652 "claimed": false, 00:08:27.652 "zoned": false, 00:08:27.652 "supported_io_types": { 00:08:27.652 "read": true, 00:08:27.652 "write": true, 00:08:27.652 "unmap": true, 00:08:27.652 "flush": true, 00:08:27.652 "reset": true, 00:08:27.652 "nvme_admin": false, 00:08:27.652 "nvme_io": false, 00:08:27.652 "nvme_io_md": false, 00:08:27.652 "write_zeroes": true, 
00:08:27.652 "zcopy": true, 00:08:27.652 "get_zone_info": false, 00:08:27.652 "zone_management": false, 00:08:27.652 "zone_append": false, 00:08:27.652 "compare": false, 00:08:27.652 "compare_and_write": false, 00:08:27.652 "abort": true, 00:08:27.652 "seek_hole": false, 00:08:27.652 "seek_data": false, 00:08:27.652 "copy": true, 00:08:27.652 "nvme_iov_md": false 00:08:27.652 }, 00:08:27.652 "memory_domains": [ 00:08:27.652 { 00:08:27.652 "dma_device_id": "system", 00:08:27.652 "dma_device_type": 1 00:08:27.652 }, 00:08:27.652 { 00:08:27.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.652 "dma_device_type": 2 00:08:27.652 } 00:08:27.652 ], 00:08:27.652 "driver_specific": {} 00:08:27.652 } 00:08:27.652 ] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 BaseBdev3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.652 11:51:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 [ 00:08:27.652 { 00:08:27.652 "name": "BaseBdev3", 00:08:27.652 "aliases": [ 00:08:27.652 "001ae9ae-04d8-40d0-8021-c18f5d921c88" 00:08:27.652 ], 00:08:27.652 "product_name": "Malloc disk", 00:08:27.652 "block_size": 512, 00:08:27.652 "num_blocks": 65536, 00:08:27.652 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:27.652 "assigned_rate_limits": { 00:08:27.652 "rw_ios_per_sec": 0, 00:08:27.652 "rw_mbytes_per_sec": 0, 00:08:27.652 "r_mbytes_per_sec": 0, 00:08:27.652 "w_mbytes_per_sec": 0 00:08:27.652 }, 00:08:27.652 "claimed": false, 00:08:27.652 "zoned": false, 00:08:27.652 "supported_io_types": { 00:08:27.652 "read": true, 00:08:27.652 "write": true, 00:08:27.652 "unmap": true, 00:08:27.652 "flush": true, 00:08:27.652 "reset": true, 00:08:27.652 "nvme_admin": false, 00:08:27.652 "nvme_io": false, 00:08:27.652 "nvme_io_md": false, 00:08:27.652 "write_zeroes": true, 
00:08:27.652 "zcopy": true, 00:08:27.652 "get_zone_info": false, 00:08:27.652 "zone_management": false, 00:08:27.652 "zone_append": false, 00:08:27.652 "compare": false, 00:08:27.652 "compare_and_write": false, 00:08:27.652 "abort": true, 00:08:27.652 "seek_hole": false, 00:08:27.652 "seek_data": false, 00:08:27.652 "copy": true, 00:08:27.652 "nvme_iov_md": false 00:08:27.652 }, 00:08:27.652 "memory_domains": [ 00:08:27.652 { 00:08:27.652 "dma_device_id": "system", 00:08:27.652 "dma_device_type": 1 00:08:27.652 }, 00:08:27.652 { 00:08:27.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.652 "dma_device_type": 2 00:08:27.652 } 00:08:27.652 ], 00:08:27.652 "driver_specific": {} 00:08:27.652 } 00:08:27.652 ] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 [2024-12-06 11:51:25.391042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.652 [2024-12-06 11:51:25.391087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.652 [2024-12-06 11:51:25.391104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.652 [2024-12-06 11:51:25.392844] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.912 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.912 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:27.912 "name": "Existed_Raid", 00:08:27.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.912 "strip_size_kb": 0, 00:08:27.912 "state": "configuring", 00:08:27.912 "raid_level": "raid1", 00:08:27.912 "superblock": false, 00:08:27.912 "num_base_bdevs": 3, 00:08:27.912 "num_base_bdevs_discovered": 2, 00:08:27.912 "num_base_bdevs_operational": 3, 00:08:27.912 "base_bdevs_list": [ 00:08:27.912 { 00:08:27.912 "name": "BaseBdev1", 00:08:27.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.912 "is_configured": false, 00:08:27.912 "data_offset": 0, 00:08:27.912 "data_size": 0 00:08:27.912 }, 00:08:27.912 { 00:08:27.912 "name": "BaseBdev2", 00:08:27.912 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:27.912 "is_configured": true, 00:08:27.912 "data_offset": 0, 00:08:27.912 "data_size": 65536 00:08:27.912 }, 00:08:27.912 { 00:08:27.912 "name": "BaseBdev3", 00:08:27.912 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:27.912 "is_configured": true, 00:08:27.912 "data_offset": 0, 00:08:27.912 "data_size": 65536 00:08:27.912 } 00:08:27.912 ] 00:08:27.912 }' 00:08:27.912 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.912 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.171 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:28.171 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.171 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.171 [2024-12-06 11:51:25.790367] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.171 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.171 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.172 "name": "Existed_Raid", 00:08:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.172 "strip_size_kb": 0, 00:08:28.172 "state": "configuring", 00:08:28.172 "raid_level": "raid1", 00:08:28.172 "superblock": false, 00:08:28.172 "num_base_bdevs": 3, 
00:08:28.172 "num_base_bdevs_discovered": 1, 00:08:28.172 "num_base_bdevs_operational": 3, 00:08:28.172 "base_bdevs_list": [ 00:08:28.172 { 00:08:28.172 "name": "BaseBdev1", 00:08:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.172 "is_configured": false, 00:08:28.172 "data_offset": 0, 00:08:28.172 "data_size": 0 00:08:28.172 }, 00:08:28.172 { 00:08:28.172 "name": null, 00:08:28.172 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:28.172 "is_configured": false, 00:08:28.172 "data_offset": 0, 00:08:28.172 "data_size": 65536 00:08:28.172 }, 00:08:28.172 { 00:08:28.172 "name": "BaseBdev3", 00:08:28.172 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:28.172 "is_configured": true, 00:08:28.172 "data_offset": 0, 00:08:28.172 "data_size": 65536 00:08:28.172 } 00:08:28.172 ] 00:08:28.172 }' 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.172 11:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.740 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.741 11:51:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.741 [2024-12-06 11:51:26.272513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.741 BaseBdev1 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.741 [ 00:08:28.741 { 00:08:28.741 "name": "BaseBdev1", 00:08:28.741 "aliases": [ 00:08:28.741 "f4dec256-ddbc-4817-b160-13c01ddb73ab" 00:08:28.741 ], 00:08:28.741 "product_name": "Malloc disk", 
00:08:28.741 "block_size": 512, 00:08:28.741 "num_blocks": 65536, 00:08:28.741 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:28.741 "assigned_rate_limits": { 00:08:28.741 "rw_ios_per_sec": 0, 00:08:28.741 "rw_mbytes_per_sec": 0, 00:08:28.741 "r_mbytes_per_sec": 0, 00:08:28.741 "w_mbytes_per_sec": 0 00:08:28.741 }, 00:08:28.741 "claimed": true, 00:08:28.741 "claim_type": "exclusive_write", 00:08:28.741 "zoned": false, 00:08:28.741 "supported_io_types": { 00:08:28.741 "read": true, 00:08:28.741 "write": true, 00:08:28.741 "unmap": true, 00:08:28.741 "flush": true, 00:08:28.741 "reset": true, 00:08:28.741 "nvme_admin": false, 00:08:28.741 "nvme_io": false, 00:08:28.741 "nvme_io_md": false, 00:08:28.741 "write_zeroes": true, 00:08:28.741 "zcopy": true, 00:08:28.741 "get_zone_info": false, 00:08:28.741 "zone_management": false, 00:08:28.741 "zone_append": false, 00:08:28.741 "compare": false, 00:08:28.741 "compare_and_write": false, 00:08:28.741 "abort": true, 00:08:28.741 "seek_hole": false, 00:08:28.741 "seek_data": false, 00:08:28.741 "copy": true, 00:08:28.741 "nvme_iov_md": false 00:08:28.741 }, 00:08:28.741 "memory_domains": [ 00:08:28.741 { 00:08:28.741 "dma_device_id": "system", 00:08:28.741 "dma_device_type": 1 00:08:28.741 }, 00:08:28.741 { 00:08:28.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.741 "dma_device_type": 2 00:08:28.741 } 00:08:28.741 ], 00:08:28.741 "driver_specific": {} 00:08:28.741 } 00:08:28.741 ] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.741 "name": "Existed_Raid", 00:08:28.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.741 "strip_size_kb": 0, 00:08:28.741 "state": "configuring", 00:08:28.741 "raid_level": "raid1", 00:08:28.741 "superblock": false, 00:08:28.741 "num_base_bdevs": 3, 00:08:28.741 "num_base_bdevs_discovered": 2, 00:08:28.741 "num_base_bdevs_operational": 3, 00:08:28.741 "base_bdevs_list": [ 00:08:28.741 { 00:08:28.741 "name": "BaseBdev1", 00:08:28.741 "uuid": 
"f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:28.741 "is_configured": true, 00:08:28.741 "data_offset": 0, 00:08:28.741 "data_size": 65536 00:08:28.741 }, 00:08:28.741 { 00:08:28.741 "name": null, 00:08:28.741 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:28.741 "is_configured": false, 00:08:28.741 "data_offset": 0, 00:08:28.741 "data_size": 65536 00:08:28.741 }, 00:08:28.741 { 00:08:28.741 "name": "BaseBdev3", 00:08:28.741 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:28.741 "is_configured": true, 00:08:28.741 "data_offset": 0, 00:08:28.741 "data_size": 65536 00:08:28.741 } 00:08:28.741 ] 00:08:28.741 }' 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.741 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.000 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.000 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.000 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.000 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.260 [2024-12-06 11:51:26.775696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:29.260 11:51:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.260 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.260 "name": "Existed_Raid", 00:08:29.260 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:29.260 "strip_size_kb": 0, 00:08:29.260 "state": "configuring", 00:08:29.260 "raid_level": "raid1", 00:08:29.260 "superblock": false, 00:08:29.260 "num_base_bdevs": 3, 00:08:29.260 "num_base_bdevs_discovered": 1, 00:08:29.260 "num_base_bdevs_operational": 3, 00:08:29.260 "base_bdevs_list": [ 00:08:29.260 { 00:08:29.260 "name": "BaseBdev1", 00:08:29.260 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:29.260 "is_configured": true, 00:08:29.260 "data_offset": 0, 00:08:29.260 "data_size": 65536 00:08:29.260 }, 00:08:29.260 { 00:08:29.260 "name": null, 00:08:29.260 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:29.260 "is_configured": false, 00:08:29.260 "data_offset": 0, 00:08:29.260 "data_size": 65536 00:08:29.260 }, 00:08:29.260 { 00:08:29.260 "name": null, 00:08:29.260 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:29.260 "is_configured": false, 00:08:29.260 "data_offset": 0, 00:08:29.260 "data_size": 65536 00:08:29.261 } 00:08:29.261 ] 00:08:29.261 }' 00:08:29.261 11:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.261 11:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.521 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.521 [2024-12-06 11:51:27.270952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.780 "name": "Existed_Raid", 00:08:29.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.780 "strip_size_kb": 0, 00:08:29.780 "state": "configuring", 00:08:29.780 "raid_level": "raid1", 00:08:29.780 "superblock": false, 00:08:29.780 "num_base_bdevs": 3, 00:08:29.780 "num_base_bdevs_discovered": 2, 00:08:29.780 "num_base_bdevs_operational": 3, 00:08:29.780 "base_bdevs_list": [ 00:08:29.780 { 00:08:29.780 "name": "BaseBdev1", 00:08:29.780 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:29.780 "is_configured": true, 00:08:29.780 "data_offset": 0, 00:08:29.780 "data_size": 65536 00:08:29.780 }, 00:08:29.780 { 00:08:29.780 "name": null, 00:08:29.780 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:29.780 "is_configured": false, 00:08:29.780 "data_offset": 0, 00:08:29.780 "data_size": 65536 00:08:29.780 }, 00:08:29.780 { 00:08:29.780 "name": "BaseBdev3", 00:08:29.780 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:29.780 "is_configured": true, 00:08:29.780 "data_offset": 0, 00:08:29.780 "data_size": 65536 00:08:29.780 } 00:08:29.780 ] 00:08:29.780 }' 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.780 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.040 11:51:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.040 [2024-12-06 11:51:27.722196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.040 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.040 "name": "Existed_Raid", 00:08:30.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.040 "strip_size_kb": 0, 00:08:30.040 "state": "configuring", 00:08:30.040 "raid_level": "raid1", 00:08:30.040 "superblock": false, 00:08:30.040 "num_base_bdevs": 3, 00:08:30.041 "num_base_bdevs_discovered": 1, 00:08:30.041 "num_base_bdevs_operational": 3, 00:08:30.041 "base_bdevs_list": [ 00:08:30.041 { 00:08:30.041 "name": null, 00:08:30.041 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:30.041 "is_configured": false, 00:08:30.041 "data_offset": 0, 00:08:30.041 "data_size": 65536 00:08:30.041 }, 00:08:30.041 { 00:08:30.041 "name": null, 00:08:30.041 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:30.041 "is_configured": false, 00:08:30.041 "data_offset": 0, 00:08:30.041 "data_size": 65536 00:08:30.041 }, 00:08:30.041 { 00:08:30.041 "name": "BaseBdev3", 00:08:30.041 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:30.041 "is_configured": true, 00:08:30.041 "data_offset": 0, 00:08:30.041 "data_size": 65536 00:08:30.041 } 00:08:30.041 ] 00:08:30.041 }' 00:08:30.041 11:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.041 11:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.608 [2024-12-06 11:51:28.227816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.608 "name": "Existed_Raid", 00:08:30.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.608 "strip_size_kb": 0, 00:08:30.608 "state": "configuring", 00:08:30.608 "raid_level": "raid1", 00:08:30.608 "superblock": false, 00:08:30.608 "num_base_bdevs": 3, 00:08:30.608 "num_base_bdevs_discovered": 2, 00:08:30.608 "num_base_bdevs_operational": 3, 00:08:30.608 "base_bdevs_list": [ 00:08:30.608 { 00:08:30.608 "name": null, 00:08:30.608 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:30.608 "is_configured": false, 00:08:30.608 "data_offset": 0, 00:08:30.608 "data_size": 65536 00:08:30.608 }, 00:08:30.608 { 00:08:30.608 "name": "BaseBdev2", 00:08:30.608 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:30.608 "is_configured": true, 00:08:30.608 "data_offset": 0, 00:08:30.608 "data_size": 65536 00:08:30.608 }, 00:08:30.608 { 00:08:30.608 "name": "BaseBdev3", 
00:08:30.608 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:30.608 "is_configured": true, 00:08:30.608 "data_offset": 0, 00:08:30.608 "data_size": 65536 00:08:30.608 } 00:08:30.608 ] 00:08:30.608 }' 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.608 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4dec256-ddbc-4817-b160-13c01ddb73ab 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.176 [2024-12-06 11:51:28.773757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:31.176 [2024-12-06 11:51:28.773805] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:31.176 [2024-12-06 11:51:28.773813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:31.176 [2024-12-06 11:51:28.774056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:31.176 [2024-12-06 11:51:28.774178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:31.176 [2024-12-06 11:51:28.774194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:31.176 [2024-12-06 11:51:28.774379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.176 NewBaseBdev 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 
11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 [ 00:08:31.176 { 00:08:31.176 "name": "NewBaseBdev", 00:08:31.176 "aliases": [ 00:08:31.176 "f4dec256-ddbc-4817-b160-13c01ddb73ab" 00:08:31.176 ], 00:08:31.176 "product_name": "Malloc disk", 00:08:31.176 "block_size": 512, 00:08:31.176 "num_blocks": 65536, 00:08:31.176 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:31.176 "assigned_rate_limits": { 00:08:31.176 "rw_ios_per_sec": 0, 00:08:31.176 "rw_mbytes_per_sec": 0, 00:08:31.176 "r_mbytes_per_sec": 0, 00:08:31.176 "w_mbytes_per_sec": 0 00:08:31.176 }, 00:08:31.176 "claimed": true, 00:08:31.176 "claim_type": "exclusive_write", 00:08:31.176 "zoned": false, 00:08:31.176 "supported_io_types": { 00:08:31.176 "read": true, 00:08:31.176 "write": true, 00:08:31.176 "unmap": true, 00:08:31.176 "flush": true, 00:08:31.176 "reset": true, 00:08:31.176 "nvme_admin": false, 00:08:31.176 "nvme_io": false, 00:08:31.176 "nvme_io_md": false, 00:08:31.176 "write_zeroes": true, 00:08:31.176 "zcopy": true, 00:08:31.176 "get_zone_info": false, 00:08:31.176 "zone_management": false, 00:08:31.176 "zone_append": false, 00:08:31.176 "compare": false, 00:08:31.176 "compare_and_write": false, 00:08:31.176 "abort": true, 00:08:31.176 "seek_hole": false, 00:08:31.176 "seek_data": false, 00:08:31.176 "copy": true, 00:08:31.176 "nvme_iov_md": false 00:08:31.176 }, 00:08:31.176 "memory_domains": [ 00:08:31.176 { 00:08:31.176 "dma_device_id": "system", 00:08:31.176 "dma_device_type": 1 
00:08:31.176 }, 00:08:31.176 { 00:08:31.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.176 "dma_device_type": 2 00:08:31.176 } 00:08:31.176 ], 00:08:31.176 "driver_specific": {} 00:08:31.176 } 00:08:31.176 ] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.176 "name": "Existed_Raid", 00:08:31.176 "uuid": "885f2dbb-4662-44a0-8173-e054b05d8321", 00:08:31.176 "strip_size_kb": 0, 00:08:31.176 "state": "online", 00:08:31.176 "raid_level": "raid1", 00:08:31.176 "superblock": false, 00:08:31.176 "num_base_bdevs": 3, 00:08:31.176 "num_base_bdevs_discovered": 3, 00:08:31.176 "num_base_bdevs_operational": 3, 00:08:31.176 "base_bdevs_list": [ 00:08:31.176 { 00:08:31.176 "name": "NewBaseBdev", 00:08:31.176 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:31.176 "is_configured": true, 00:08:31.176 "data_offset": 0, 00:08:31.176 "data_size": 65536 00:08:31.176 }, 00:08:31.176 { 00:08:31.176 "name": "BaseBdev2", 00:08:31.176 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:31.176 "is_configured": true, 00:08:31.176 "data_offset": 0, 00:08:31.176 "data_size": 65536 00:08:31.176 }, 00:08:31.176 { 00:08:31.176 "name": "BaseBdev3", 00:08:31.176 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:31.176 "is_configured": true, 00:08:31.176 "data_offset": 0, 00:08:31.176 "data_size": 65536 00:08:31.176 } 00:08:31.176 ] 00:08:31.176 }' 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.176 11:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.744 [2024-12-06 11:51:29.281204] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.744 "name": "Existed_Raid", 00:08:31.744 "aliases": [ 00:08:31.744 "885f2dbb-4662-44a0-8173-e054b05d8321" 00:08:31.744 ], 00:08:31.744 "product_name": "Raid Volume", 00:08:31.744 "block_size": 512, 00:08:31.744 "num_blocks": 65536, 00:08:31.744 "uuid": "885f2dbb-4662-44a0-8173-e054b05d8321", 00:08:31.744 "assigned_rate_limits": { 00:08:31.744 "rw_ios_per_sec": 0, 00:08:31.744 "rw_mbytes_per_sec": 0, 00:08:31.744 "r_mbytes_per_sec": 0, 00:08:31.744 "w_mbytes_per_sec": 0 00:08:31.744 }, 00:08:31.744 "claimed": false, 00:08:31.744 "zoned": false, 00:08:31.744 "supported_io_types": { 00:08:31.744 "read": true, 00:08:31.744 "write": true, 00:08:31.744 "unmap": false, 00:08:31.744 "flush": false, 00:08:31.744 "reset": true, 00:08:31.744 "nvme_admin": false, 00:08:31.744 "nvme_io": false, 00:08:31.744 "nvme_io_md": false, 00:08:31.744 "write_zeroes": true, 00:08:31.744 "zcopy": false, 00:08:31.744 "get_zone_info": false, 00:08:31.744 "zone_management": false, 00:08:31.744 
"zone_append": false, 00:08:31.744 "compare": false, 00:08:31.744 "compare_and_write": false, 00:08:31.744 "abort": false, 00:08:31.744 "seek_hole": false, 00:08:31.744 "seek_data": false, 00:08:31.744 "copy": false, 00:08:31.744 "nvme_iov_md": false 00:08:31.744 }, 00:08:31.744 "memory_domains": [ 00:08:31.744 { 00:08:31.744 "dma_device_id": "system", 00:08:31.744 "dma_device_type": 1 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.744 "dma_device_type": 2 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "dma_device_id": "system", 00:08:31.744 "dma_device_type": 1 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.744 "dma_device_type": 2 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "dma_device_id": "system", 00:08:31.744 "dma_device_type": 1 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.744 "dma_device_type": 2 00:08:31.744 } 00:08:31.744 ], 00:08:31.744 "driver_specific": { 00:08:31.744 "raid": { 00:08:31.744 "uuid": "885f2dbb-4662-44a0-8173-e054b05d8321", 00:08:31.744 "strip_size_kb": 0, 00:08:31.744 "state": "online", 00:08:31.744 "raid_level": "raid1", 00:08:31.744 "superblock": false, 00:08:31.744 "num_base_bdevs": 3, 00:08:31.744 "num_base_bdevs_discovered": 3, 00:08:31.744 "num_base_bdevs_operational": 3, 00:08:31.744 "base_bdevs_list": [ 00:08:31.744 { 00:08:31.744 "name": "NewBaseBdev", 00:08:31.744 "uuid": "f4dec256-ddbc-4817-b160-13c01ddb73ab", 00:08:31.744 "is_configured": true, 00:08:31.744 "data_offset": 0, 00:08:31.744 "data_size": 65536 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "name": "BaseBdev2", 00:08:31.744 "uuid": "8d5bc828-daab-4008-8c3b-7f99fb700f38", 00:08:31.744 "is_configured": true, 00:08:31.744 "data_offset": 0, 00:08:31.744 "data_size": 65536 00:08:31.744 }, 00:08:31.744 { 00:08:31.744 "name": "BaseBdev3", 00:08:31.744 "uuid": "001ae9ae-04d8-40d0-8021-c18f5d921c88", 00:08:31.744 "is_configured": true, 
00:08:31.744 "data_offset": 0, 00:08:31.744 "data_size": 65536 00:08:31.744 } 00:08:31.744 ] 00:08:31.744 } 00:08:31.744 } 00:08:31.744 }' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:31.744 BaseBdev2 00:08:31.744 BaseBdev3' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.744 11:51:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.744 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.745 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 [2024-12-06 11:51:29.544478] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:32.004 [2024-12-06 11:51:29.544505] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.004 [2024-12-06 11:51:29.544563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.004 [2024-12-06 11:51:29.544807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.004 [2024-12-06 11:51:29.544824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78133 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78133 ']' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78133 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78133 00:08:32.004 killing process with pid 78133 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78133' 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78133 00:08:32.004 [2024-12-06 11:51:29.595241] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:32.004 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78133 00:08:32.004 [2024-12-06 11:51:29.626396] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.263 ************************************ 00:08:32.263 END TEST raid_state_function_test 00:08:32.263 ************************************ 00:08:32.263 11:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.263 00:08:32.263 real 0m8.713s 00:08:32.263 user 0m14.916s 00:08:32.263 sys 0m1.682s 00:08:32.263 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.263 11:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 11:51:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:32.264 11:51:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:32.264 11:51:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.264 11:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.264 ************************************ 00:08:32.264 START TEST raid_state_function_test_sb 00:08:32.264 ************************************ 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78737 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.264 Process raid pid: 78737 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78737' 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78737 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78737 ']' 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.264 11:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.524 [2024-12-06 11:51:30.023440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:32.524 [2024-12-06 11:51:30.023572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.524 [2024-12-06 11:51:30.164413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.524 [2024-12-06 11:51:30.207793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.524 [2024-12-06 11:51:30.248965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.524 [2024-12-06 11:51:30.249002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.462 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.462 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.463 [2024-12-06 11:51:30.857479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.463 [2024-12-06 11:51:30.857541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.463 [2024-12-06 11:51:30.857552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.463 [2024-12-06 11:51:30.857561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.463 [2024-12-06 11:51:30.857567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:33.463 [2024-12-06 11:51:30.857579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.463 "name": "Existed_Raid", 00:08:33.463 "uuid": "a5dbc8c8-b052-4c1c-9c30-1834204370f5", 00:08:33.463 "strip_size_kb": 0, 00:08:33.463 "state": "configuring", 00:08:33.463 "raid_level": "raid1", 00:08:33.463 "superblock": true, 00:08:33.463 "num_base_bdevs": 3, 00:08:33.463 "num_base_bdevs_discovered": 0, 00:08:33.463 "num_base_bdevs_operational": 3, 00:08:33.463 "base_bdevs_list": [ 00:08:33.463 { 00:08:33.463 "name": "BaseBdev1", 00:08:33.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.463 "is_configured": false, 00:08:33.463 "data_offset": 0, 00:08:33.463 "data_size": 0 00:08:33.463 }, 00:08:33.463 { 00:08:33.463 "name": "BaseBdev2", 00:08:33.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.463 "is_configured": false, 00:08:33.463 "data_offset": 0, 00:08:33.463 "data_size": 0 00:08:33.463 }, 00:08:33.463 { 00:08:33.463 "name": "BaseBdev3", 00:08:33.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.463 "is_configured": false, 00:08:33.463 "data_offset": 0, 00:08:33.463 "data_size": 0 00:08:33.463 } 00:08:33.463 ] 00:08:33.463 }' 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.463 11:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.723 [2024-12-06 11:51:31.280690] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.723 [2024-12-06 11:51:31.280731] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.723 [2024-12-06 11:51:31.288695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.723 [2024-12-06 11:51:31.288733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.723 [2024-12-06 11:51:31.288741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.723 [2024-12-06 11:51:31.288750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.723 [2024-12-06 11:51:31.288755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.723 [2024-12-06 11:51:31.288763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.723 [2024-12-06 11:51:31.305244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.723 BaseBdev1 
00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.723 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.724 [ 00:08:33.724 { 00:08:33.724 "name": "BaseBdev1", 00:08:33.724 "aliases": [ 00:08:33.724 "db2a8277-8c98-4864-93b1-2e564b3f8cc2" 00:08:33.724 ], 00:08:33.724 "product_name": "Malloc disk", 00:08:33.724 "block_size": 512, 00:08:33.724 "num_blocks": 65536, 00:08:33.724 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:33.724 "assigned_rate_limits": { 00:08:33.724 
"rw_ios_per_sec": 0, 00:08:33.724 "rw_mbytes_per_sec": 0, 00:08:33.724 "r_mbytes_per_sec": 0, 00:08:33.724 "w_mbytes_per_sec": 0 00:08:33.724 }, 00:08:33.724 "claimed": true, 00:08:33.724 "claim_type": "exclusive_write", 00:08:33.724 "zoned": false, 00:08:33.724 "supported_io_types": { 00:08:33.724 "read": true, 00:08:33.724 "write": true, 00:08:33.724 "unmap": true, 00:08:33.724 "flush": true, 00:08:33.724 "reset": true, 00:08:33.724 "nvme_admin": false, 00:08:33.724 "nvme_io": false, 00:08:33.724 "nvme_io_md": false, 00:08:33.724 "write_zeroes": true, 00:08:33.724 "zcopy": true, 00:08:33.724 "get_zone_info": false, 00:08:33.724 "zone_management": false, 00:08:33.724 "zone_append": false, 00:08:33.724 "compare": false, 00:08:33.724 "compare_and_write": false, 00:08:33.724 "abort": true, 00:08:33.724 "seek_hole": false, 00:08:33.724 "seek_data": false, 00:08:33.724 "copy": true, 00:08:33.724 "nvme_iov_md": false 00:08:33.724 }, 00:08:33.724 "memory_domains": [ 00:08:33.724 { 00:08:33.724 "dma_device_id": "system", 00:08:33.724 "dma_device_type": 1 00:08:33.724 }, 00:08:33.724 { 00:08:33.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.724 "dma_device_type": 2 00:08:33.724 } 00:08:33.724 ], 00:08:33.724 "driver_specific": {} 00:08:33.724 } 00:08:33.724 ] 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.724 "name": "Existed_Raid", 00:08:33.724 "uuid": "1023a557-a28f-4d6e-b02f-ca188280ea97", 00:08:33.724 "strip_size_kb": 0, 00:08:33.724 "state": "configuring", 00:08:33.724 "raid_level": "raid1", 00:08:33.724 "superblock": true, 00:08:33.724 "num_base_bdevs": 3, 00:08:33.724 "num_base_bdevs_discovered": 1, 00:08:33.724 "num_base_bdevs_operational": 3, 00:08:33.724 "base_bdevs_list": [ 00:08:33.724 { 00:08:33.724 "name": "BaseBdev1", 00:08:33.724 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:33.724 "is_configured": true, 00:08:33.724 "data_offset": 2048, 00:08:33.724 "data_size": 63488 
00:08:33.724 }, 00:08:33.724 { 00:08:33.724 "name": "BaseBdev2", 00:08:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.724 "is_configured": false, 00:08:33.724 "data_offset": 0, 00:08:33.724 "data_size": 0 00:08:33.724 }, 00:08:33.724 { 00:08:33.724 "name": "BaseBdev3", 00:08:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.724 "is_configured": false, 00:08:33.724 "data_offset": 0, 00:08:33.724 "data_size": 0 00:08:33.724 } 00:08:33.724 ] 00:08:33.724 }' 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.724 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.317 [2024-12-06 11:51:31.776448] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.317 [2024-12-06 11:51:31.776489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.317 [2024-12-06 11:51:31.788467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.317 [2024-12-06 11:51:31.790230] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.317 [2024-12-06 11:51:31.790278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.317 [2024-12-06 11:51:31.790287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.317 [2024-12-06 11:51:31.790296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.317 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.318 "name": "Existed_Raid", 00:08:34.318 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:34.318 "strip_size_kb": 0, 00:08:34.318 "state": "configuring", 00:08:34.318 "raid_level": "raid1", 00:08:34.318 "superblock": true, 00:08:34.318 "num_base_bdevs": 3, 00:08:34.318 "num_base_bdevs_discovered": 1, 00:08:34.318 "num_base_bdevs_operational": 3, 00:08:34.318 "base_bdevs_list": [ 00:08:34.318 { 00:08:34.318 "name": "BaseBdev1", 00:08:34.318 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:34.318 "is_configured": true, 00:08:34.318 "data_offset": 2048, 00:08:34.318 "data_size": 63488 00:08:34.318 }, 00:08:34.318 { 00:08:34.318 "name": "BaseBdev2", 00:08:34.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.318 "is_configured": false, 00:08:34.318 "data_offset": 0, 00:08:34.318 "data_size": 0 00:08:34.318 }, 00:08:34.318 { 00:08:34.318 "name": "BaseBdev3", 00:08:34.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.318 "is_configured": false, 00:08:34.318 "data_offset": 0, 00:08:34.318 "data_size": 0 00:08:34.318 } 00:08:34.318 ] 00:08:34.318 }' 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.318 11:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.578 [2024-12-06 11:51:32.276009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.578 BaseBdev2 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.578 [ 00:08:34.578 { 00:08:34.578 "name": "BaseBdev2", 00:08:34.578 "aliases": [ 00:08:34.578 "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4" 00:08:34.578 ], 00:08:34.578 "product_name": "Malloc disk", 00:08:34.578 "block_size": 512, 00:08:34.578 "num_blocks": 65536, 00:08:34.578 "uuid": "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4", 00:08:34.578 "assigned_rate_limits": { 00:08:34.578 "rw_ios_per_sec": 0, 00:08:34.578 "rw_mbytes_per_sec": 0, 00:08:34.578 "r_mbytes_per_sec": 0, 00:08:34.578 "w_mbytes_per_sec": 0 00:08:34.578 }, 00:08:34.578 "claimed": true, 00:08:34.578 "claim_type": "exclusive_write", 00:08:34.578 "zoned": false, 00:08:34.578 "supported_io_types": { 00:08:34.578 "read": true, 00:08:34.578 "write": true, 00:08:34.578 "unmap": true, 00:08:34.578 "flush": true, 00:08:34.578 "reset": true, 00:08:34.578 "nvme_admin": false, 00:08:34.578 "nvme_io": false, 00:08:34.578 "nvme_io_md": false, 00:08:34.578 "write_zeroes": true, 00:08:34.578 "zcopy": true, 00:08:34.578 "get_zone_info": false, 00:08:34.578 "zone_management": false, 00:08:34.578 "zone_append": false, 00:08:34.578 "compare": false, 00:08:34.578 "compare_and_write": false, 00:08:34.578 "abort": true, 00:08:34.578 "seek_hole": false, 00:08:34.578 "seek_data": false, 00:08:34.578 "copy": true, 00:08:34.578 "nvme_iov_md": false 00:08:34.578 }, 00:08:34.578 "memory_domains": [ 00:08:34.578 { 00:08:34.578 "dma_device_id": "system", 00:08:34.578 "dma_device_type": 1 00:08:34.578 }, 00:08:34.578 { 00:08:34.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.578 "dma_device_type": 2 00:08:34.578 } 00:08:34.578 ], 00:08:34.578 "driver_specific": {} 00:08:34.578 } 00:08:34.578 ] 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.578 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.838 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.838 
11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.838 "name": "Existed_Raid", 00:08:34.838 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:34.838 "strip_size_kb": 0, 00:08:34.838 "state": "configuring", 00:08:34.838 "raid_level": "raid1", 00:08:34.838 "superblock": true, 00:08:34.838 "num_base_bdevs": 3, 00:08:34.838 "num_base_bdevs_discovered": 2, 00:08:34.838 "num_base_bdevs_operational": 3, 00:08:34.838 "base_bdevs_list": [ 00:08:34.838 { 00:08:34.838 "name": "BaseBdev1", 00:08:34.838 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:34.838 "is_configured": true, 00:08:34.838 "data_offset": 2048, 00:08:34.838 "data_size": 63488 00:08:34.838 }, 00:08:34.838 { 00:08:34.838 "name": "BaseBdev2", 00:08:34.838 "uuid": "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4", 00:08:34.838 "is_configured": true, 00:08:34.838 "data_offset": 2048, 00:08:34.838 "data_size": 63488 00:08:34.838 }, 00:08:34.838 { 00:08:34.838 "name": "BaseBdev3", 00:08:34.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.838 "is_configured": false, 00:08:34.838 "data_offset": 0, 00:08:34.838 "data_size": 0 00:08:34.838 } 00:08:34.838 ] 00:08:34.838 }' 00:08:34.838 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.838 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.098 [2024-12-06 11:51:32.762099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.098 [2024-12-06 11:51:32.762289] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:08:35.098 [2024-12-06 11:51:32.762317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.098 [2024-12-06 11:51:32.762604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:35.098 BaseBdev3 00:08:35.098 [2024-12-06 11:51:32.762745] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:35.098 [2024-12-06 11:51:32.762766] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:35.098 [2024-12-06 11:51:32.762897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.098 11:51:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.098 [ 00:08:35.098 { 00:08:35.098 "name": "BaseBdev3", 00:08:35.098 "aliases": [ 00:08:35.098 "fcfb6fc1-2aaf-46f7-8e43-56fd64e88cc8" 00:08:35.098 ], 00:08:35.098 "product_name": "Malloc disk", 00:08:35.098 "block_size": 512, 00:08:35.098 "num_blocks": 65536, 00:08:35.098 "uuid": "fcfb6fc1-2aaf-46f7-8e43-56fd64e88cc8", 00:08:35.098 "assigned_rate_limits": { 00:08:35.098 "rw_ios_per_sec": 0, 00:08:35.098 "rw_mbytes_per_sec": 0, 00:08:35.098 "r_mbytes_per_sec": 0, 00:08:35.098 "w_mbytes_per_sec": 0 00:08:35.098 }, 00:08:35.098 "claimed": true, 00:08:35.098 "claim_type": "exclusive_write", 00:08:35.098 "zoned": false, 00:08:35.098 "supported_io_types": { 00:08:35.098 "read": true, 00:08:35.098 "write": true, 00:08:35.098 "unmap": true, 00:08:35.098 "flush": true, 00:08:35.098 "reset": true, 00:08:35.098 "nvme_admin": false, 00:08:35.098 "nvme_io": false, 00:08:35.098 "nvme_io_md": false, 00:08:35.098 "write_zeroes": true, 00:08:35.098 "zcopy": true, 00:08:35.098 "get_zone_info": false, 00:08:35.098 "zone_management": false, 00:08:35.098 "zone_append": false, 00:08:35.098 "compare": false, 00:08:35.098 "compare_and_write": false, 00:08:35.098 "abort": true, 00:08:35.098 "seek_hole": false, 00:08:35.098 "seek_data": false, 00:08:35.098 "copy": true, 00:08:35.098 "nvme_iov_md": false 00:08:35.098 }, 00:08:35.098 "memory_domains": [ 00:08:35.098 { 00:08:35.098 "dma_device_id": "system", 00:08:35.098 "dma_device_type": 1 00:08:35.098 }, 00:08:35.098 { 00:08:35.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.098 "dma_device_type": 2 00:08:35.098 } 00:08:35.098 ], 00:08:35.098 "driver_specific": {} 00:08:35.098 } 00:08:35.098 ] 
00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.098 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.099 11:51:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.099 "name": "Existed_Raid", 00:08:35.099 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:35.099 "strip_size_kb": 0, 00:08:35.099 "state": "online", 00:08:35.099 "raid_level": "raid1", 00:08:35.099 "superblock": true, 00:08:35.099 "num_base_bdevs": 3, 00:08:35.099 "num_base_bdevs_discovered": 3, 00:08:35.099 "num_base_bdevs_operational": 3, 00:08:35.099 "base_bdevs_list": [ 00:08:35.099 { 00:08:35.099 "name": "BaseBdev1", 00:08:35.099 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:35.099 "is_configured": true, 00:08:35.099 "data_offset": 2048, 00:08:35.099 "data_size": 63488 00:08:35.099 }, 00:08:35.099 { 00:08:35.099 "name": "BaseBdev2", 00:08:35.099 "uuid": "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4", 00:08:35.099 "is_configured": true, 00:08:35.099 "data_offset": 2048, 00:08:35.099 "data_size": 63488 00:08:35.099 }, 00:08:35.099 { 00:08:35.099 "name": "BaseBdev3", 00:08:35.099 "uuid": "fcfb6fc1-2aaf-46f7-8e43-56fd64e88cc8", 00:08:35.099 "is_configured": true, 00:08:35.099 "data_offset": 2048, 00:08:35.099 "data_size": 63488 00:08:35.099 } 00:08:35.099 ] 00:08:35.099 }' 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.099 11:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.669 [2024-12-06 11:51:33.253566] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.669 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.669 "name": "Existed_Raid", 00:08:35.669 "aliases": [ 00:08:35.669 "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3" 00:08:35.669 ], 00:08:35.669 "product_name": "Raid Volume", 00:08:35.669 "block_size": 512, 00:08:35.669 "num_blocks": 63488, 00:08:35.669 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:35.669 "assigned_rate_limits": { 00:08:35.669 "rw_ios_per_sec": 0, 00:08:35.669 "rw_mbytes_per_sec": 0, 00:08:35.669 "r_mbytes_per_sec": 0, 00:08:35.669 "w_mbytes_per_sec": 0 00:08:35.669 }, 00:08:35.669 "claimed": false, 00:08:35.669 "zoned": false, 00:08:35.670 "supported_io_types": { 00:08:35.670 "read": true, 00:08:35.670 "write": true, 00:08:35.670 "unmap": false, 00:08:35.670 "flush": false, 00:08:35.670 "reset": true, 00:08:35.670 "nvme_admin": false, 00:08:35.670 "nvme_io": false, 00:08:35.670 "nvme_io_md": false, 00:08:35.670 
"write_zeroes": true, 00:08:35.670 "zcopy": false, 00:08:35.670 "get_zone_info": false, 00:08:35.670 "zone_management": false, 00:08:35.670 "zone_append": false, 00:08:35.670 "compare": false, 00:08:35.670 "compare_and_write": false, 00:08:35.670 "abort": false, 00:08:35.670 "seek_hole": false, 00:08:35.670 "seek_data": false, 00:08:35.670 "copy": false, 00:08:35.670 "nvme_iov_md": false 00:08:35.670 }, 00:08:35.670 "memory_domains": [ 00:08:35.670 { 00:08:35.670 "dma_device_id": "system", 00:08:35.670 "dma_device_type": 1 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.670 "dma_device_type": 2 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "dma_device_id": "system", 00:08:35.670 "dma_device_type": 1 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.670 "dma_device_type": 2 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "dma_device_id": "system", 00:08:35.670 "dma_device_type": 1 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.670 "dma_device_type": 2 00:08:35.670 } 00:08:35.670 ], 00:08:35.670 "driver_specific": { 00:08:35.670 "raid": { 00:08:35.670 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:35.670 "strip_size_kb": 0, 00:08:35.670 "state": "online", 00:08:35.670 "raid_level": "raid1", 00:08:35.670 "superblock": true, 00:08:35.670 "num_base_bdevs": 3, 00:08:35.670 "num_base_bdevs_discovered": 3, 00:08:35.670 "num_base_bdevs_operational": 3, 00:08:35.670 "base_bdevs_list": [ 00:08:35.670 { 00:08:35.670 "name": "BaseBdev1", 00:08:35.670 "uuid": "db2a8277-8c98-4864-93b1-2e564b3f8cc2", 00:08:35.670 "is_configured": true, 00:08:35.670 "data_offset": 2048, 00:08:35.670 "data_size": 63488 00:08:35.670 }, 00:08:35.670 { 00:08:35.670 "name": "BaseBdev2", 00:08:35.670 "uuid": "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4", 00:08:35.670 "is_configured": true, 00:08:35.670 "data_offset": 2048, 00:08:35.670 "data_size": 63488 00:08:35.670 }, 
00:08:35.670 { 00:08:35.670 "name": "BaseBdev3", 00:08:35.670 "uuid": "fcfb6fc1-2aaf-46f7-8e43-56fd64e88cc8", 00:08:35.670 "is_configured": true, 00:08:35.670 "data_offset": 2048, 00:08:35.670 "data_size": 63488 00:08:35.670 } 00:08:35.670 ] 00:08:35.670 } 00:08:35.670 } 00:08:35.670 }' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.670 BaseBdev2 00:08:35.670 BaseBdev3' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.670 
11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.670 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.929 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.930 [2024-12-06 11:51:33.508917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.930 
11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.930 "name": "Existed_Raid", 00:08:35.930 "uuid": "3f5221a4-b2d6-421d-b2f1-e0e807e5a5f3", 00:08:35.930 "strip_size_kb": 0, 00:08:35.930 "state": "online", 00:08:35.930 "raid_level": "raid1", 00:08:35.930 "superblock": true, 00:08:35.930 "num_base_bdevs": 3, 00:08:35.930 "num_base_bdevs_discovered": 2, 00:08:35.930 "num_base_bdevs_operational": 2, 00:08:35.930 "base_bdevs_list": [ 00:08:35.930 { 00:08:35.930 "name": null, 00:08:35.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.930 "is_configured": false, 00:08:35.930 "data_offset": 0, 00:08:35.930 "data_size": 63488 00:08:35.930 }, 00:08:35.930 { 00:08:35.930 "name": "BaseBdev2", 00:08:35.930 "uuid": "8a4cb6b9-fde5-4d9b-99ca-d2c9b6884cb4", 00:08:35.930 "is_configured": true, 00:08:35.930 "data_offset": 2048, 00:08:35.930 "data_size": 63488 00:08:35.930 }, 00:08:35.930 { 00:08:35.930 "name": "BaseBdev3", 00:08:35.930 "uuid": "fcfb6fc1-2aaf-46f7-8e43-56fd64e88cc8", 00:08:35.930 "is_configured": true, 00:08:35.930 "data_offset": 2048, 00:08:35.930 "data_size": 63488 00:08:35.930 } 00:08:35.930 ] 00:08:35.930 }' 00:08:35.930 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.930 
11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.499 11:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.499 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 [2024-12-06 11:51:34.019365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 [2024-12-06 11:51:34.082469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.500 [2024-12-06 11:51:34.082603] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.500 [2024-12-06 11:51:34.094040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.500 [2024-12-06 11:51:34.094168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.500 [2024-12-06 11:51:34.094214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 [ 00:08:36.500 { 00:08:36.500 "name": "BaseBdev2", 00:08:36.500 "aliases": [ 00:08:36.500 "467045f7-507b-429f-b9f9-ff39f80d5e5f" 00:08:36.500 ], 00:08:36.500 "product_name": "Malloc disk", 00:08:36.500 "block_size": 512, 00:08:36.500 "num_blocks": 65536, 00:08:36.500 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:36.500 "assigned_rate_limits": { 00:08:36.500 "rw_ios_per_sec": 0, 00:08:36.500 "rw_mbytes_per_sec": 0, 00:08:36.500 "r_mbytes_per_sec": 0, 00:08:36.500 "w_mbytes_per_sec": 0 00:08:36.500 }, 00:08:36.500 "claimed": false, 00:08:36.500 "zoned": false, 00:08:36.500 "supported_io_types": { 00:08:36.500 "read": true, 00:08:36.500 "write": true, 00:08:36.500 "unmap": true, 00:08:36.500 "flush": true, 00:08:36.500 "reset": true, 00:08:36.500 "nvme_admin": false, 00:08:36.500 "nvme_io": false, 00:08:36.500 
"nvme_io_md": false, 00:08:36.500 "write_zeroes": true, 00:08:36.500 "zcopy": true, 00:08:36.500 "get_zone_info": false, 00:08:36.500 "zone_management": false, 00:08:36.500 "zone_append": false, 00:08:36.500 "compare": false, 00:08:36.500 "compare_and_write": false, 00:08:36.500 "abort": true, 00:08:36.500 "seek_hole": false, 00:08:36.500 "seek_data": false, 00:08:36.500 "copy": true, 00:08:36.500 "nvme_iov_md": false 00:08:36.500 }, 00:08:36.500 "memory_domains": [ 00:08:36.500 { 00:08:36.500 "dma_device_id": "system", 00:08:36.500 "dma_device_type": 1 00:08:36.500 }, 00:08:36.500 { 00:08:36.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.500 "dma_device_type": 2 00:08:36.500 } 00:08:36.500 ], 00:08:36.500 "driver_specific": {} 00:08:36.500 } 00:08:36.500 ] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 BaseBdev3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 [ 00:08:36.500 { 00:08:36.500 "name": "BaseBdev3", 00:08:36.500 "aliases": [ 00:08:36.500 "f119252e-a91e-4820-b3bd-8de71cce2125" 00:08:36.500 ], 00:08:36.500 "product_name": "Malloc disk", 00:08:36.500 "block_size": 512, 00:08:36.500 "num_blocks": 65536, 00:08:36.500 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:36.500 "assigned_rate_limits": { 00:08:36.500 "rw_ios_per_sec": 0, 00:08:36.500 "rw_mbytes_per_sec": 0, 00:08:36.500 "r_mbytes_per_sec": 0, 00:08:36.500 "w_mbytes_per_sec": 0 00:08:36.500 }, 00:08:36.500 "claimed": false, 00:08:36.500 "zoned": false, 00:08:36.500 "supported_io_types": { 00:08:36.500 "read": true, 00:08:36.500 "write": true, 00:08:36.500 "unmap": true, 00:08:36.500 "flush": true, 00:08:36.500 "reset": true, 00:08:36.500 "nvme_admin": false, 
00:08:36.500 "nvme_io": false, 00:08:36.500 "nvme_io_md": false, 00:08:36.500 "write_zeroes": true, 00:08:36.500 "zcopy": true, 00:08:36.500 "get_zone_info": false, 00:08:36.500 "zone_management": false, 00:08:36.500 "zone_append": false, 00:08:36.500 "compare": false, 00:08:36.500 "compare_and_write": false, 00:08:36.500 "abort": true, 00:08:36.500 "seek_hole": false, 00:08:36.500 "seek_data": false, 00:08:36.500 "copy": true, 00:08:36.500 "nvme_iov_md": false 00:08:36.500 }, 00:08:36.500 "memory_domains": [ 00:08:36.500 { 00:08:36.500 "dma_device_id": "system", 00:08:36.500 "dma_device_type": 1 00:08:36.500 }, 00:08:36.500 { 00:08:36.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.500 "dma_device_type": 2 00:08:36.500 } 00:08:36.500 ], 00:08:36.500 "driver_specific": {} 00:08:36.500 } 00:08:36.500 ] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.500 [2024-12-06 11:51:34.245199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.500 [2024-12-06 11:51:34.245289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.500 [2024-12-06 11:51:34.245328] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.500 [2024-12-06 11:51:34.247091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.500 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.760 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.760 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.760 
11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.760 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.760 "name": "Existed_Raid", 00:08:36.760 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:36.760 "strip_size_kb": 0, 00:08:36.760 "state": "configuring", 00:08:36.760 "raid_level": "raid1", 00:08:36.760 "superblock": true, 00:08:36.760 "num_base_bdevs": 3, 00:08:36.760 "num_base_bdevs_discovered": 2, 00:08:36.760 "num_base_bdevs_operational": 3, 00:08:36.760 "base_bdevs_list": [ 00:08:36.760 { 00:08:36.760 "name": "BaseBdev1", 00:08:36.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.760 "is_configured": false, 00:08:36.760 "data_offset": 0, 00:08:36.760 "data_size": 0 00:08:36.760 }, 00:08:36.760 { 00:08:36.760 "name": "BaseBdev2", 00:08:36.760 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:36.760 "is_configured": true, 00:08:36.760 "data_offset": 2048, 00:08:36.760 "data_size": 63488 00:08:36.760 }, 00:08:36.760 { 00:08:36.760 "name": "BaseBdev3", 00:08:36.760 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:36.760 "is_configured": true, 00:08:36.760 "data_offset": 2048, 00:08:36.760 "data_size": 63488 00:08:36.760 } 00:08:36.760 ] 00:08:36.760 }' 00:08:36.760 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.760 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.019 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.020 [2024-12-06 11:51:34.656448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.020 11:51:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.020 "name": 
"Existed_Raid", 00:08:37.020 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:37.020 "strip_size_kb": 0, 00:08:37.020 "state": "configuring", 00:08:37.020 "raid_level": "raid1", 00:08:37.020 "superblock": true, 00:08:37.020 "num_base_bdevs": 3, 00:08:37.020 "num_base_bdevs_discovered": 1, 00:08:37.020 "num_base_bdevs_operational": 3, 00:08:37.020 "base_bdevs_list": [ 00:08:37.020 { 00:08:37.020 "name": "BaseBdev1", 00:08:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.020 "is_configured": false, 00:08:37.020 "data_offset": 0, 00:08:37.020 "data_size": 0 00:08:37.020 }, 00:08:37.020 { 00:08:37.020 "name": null, 00:08:37.020 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:37.020 "is_configured": false, 00:08:37.020 "data_offset": 0, 00:08:37.020 "data_size": 63488 00:08:37.020 }, 00:08:37.020 { 00:08:37.020 "name": "BaseBdev3", 00:08:37.020 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:37.020 "is_configured": true, 00:08:37.020 "data_offset": 2048, 00:08:37.020 "data_size": 63488 00:08:37.020 } 00:08:37.020 ] 00:08:37.020 }' 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.020 11:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.589 
11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 [2024-12-06 11:51:35.190491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.589 BaseBdev1 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 [ 00:08:37.589 { 00:08:37.589 "name": "BaseBdev1", 00:08:37.589 "aliases": [ 00:08:37.589 "4c037f16-4d6d-46c6-bb20-1d0828f000a4" 00:08:37.589 ], 00:08:37.589 "product_name": "Malloc disk", 00:08:37.589 "block_size": 512, 00:08:37.589 "num_blocks": 65536, 00:08:37.589 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:37.589 "assigned_rate_limits": { 00:08:37.589 "rw_ios_per_sec": 0, 00:08:37.589 "rw_mbytes_per_sec": 0, 00:08:37.589 "r_mbytes_per_sec": 0, 00:08:37.589 "w_mbytes_per_sec": 0 00:08:37.589 }, 00:08:37.589 "claimed": true, 00:08:37.589 "claim_type": "exclusive_write", 00:08:37.589 "zoned": false, 00:08:37.589 "supported_io_types": { 00:08:37.589 "read": true, 00:08:37.589 "write": true, 00:08:37.589 "unmap": true, 00:08:37.589 "flush": true, 00:08:37.589 "reset": true, 00:08:37.589 "nvme_admin": false, 00:08:37.589 "nvme_io": false, 00:08:37.589 "nvme_io_md": false, 00:08:37.589 "write_zeroes": true, 00:08:37.589 "zcopy": true, 00:08:37.589 "get_zone_info": false, 00:08:37.589 "zone_management": false, 00:08:37.589 "zone_append": false, 00:08:37.589 "compare": false, 00:08:37.589 "compare_and_write": false, 00:08:37.589 "abort": true, 00:08:37.589 "seek_hole": false, 00:08:37.589 "seek_data": false, 00:08:37.589 "copy": true, 00:08:37.589 "nvme_iov_md": false 00:08:37.589 }, 00:08:37.589 "memory_domains": [ 00:08:37.589 { 00:08:37.589 "dma_device_id": "system", 00:08:37.589 "dma_device_type": 1 00:08:37.589 }, 00:08:37.589 { 00:08:37.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.589 "dma_device_type": 2 00:08:37.589 } 00:08:37.589 ], 00:08:37.589 "driver_specific": {} 00:08:37.589 } 00:08:37.589 ] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.589 
11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.589 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.589 "name": "Existed_Raid", 00:08:37.589 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:37.589 "strip_size_kb": 0, 
00:08:37.590 "state": "configuring", 00:08:37.590 "raid_level": "raid1", 00:08:37.590 "superblock": true, 00:08:37.590 "num_base_bdevs": 3, 00:08:37.590 "num_base_bdevs_discovered": 2, 00:08:37.590 "num_base_bdevs_operational": 3, 00:08:37.590 "base_bdevs_list": [ 00:08:37.590 { 00:08:37.590 "name": "BaseBdev1", 00:08:37.590 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:37.590 "is_configured": true, 00:08:37.590 "data_offset": 2048, 00:08:37.590 "data_size": 63488 00:08:37.590 }, 00:08:37.590 { 00:08:37.590 "name": null, 00:08:37.590 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:37.590 "is_configured": false, 00:08:37.590 "data_offset": 0, 00:08:37.590 "data_size": 63488 00:08:37.590 }, 00:08:37.590 { 00:08:37.590 "name": "BaseBdev3", 00:08:37.590 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:37.590 "is_configured": true, 00:08:37.590 "data_offset": 2048, 00:08:37.590 "data_size": 63488 00:08:37.590 } 00:08:37.590 ] 00:08:37.590 }' 00:08:37.590 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.590 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.213 [2024-12-06 11:51:35.709654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.213 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.214 11:51:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.214 "name": "Existed_Raid", 00:08:38.214 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:38.214 "strip_size_kb": 0, 00:08:38.214 "state": "configuring", 00:08:38.214 "raid_level": "raid1", 00:08:38.214 "superblock": true, 00:08:38.214 "num_base_bdevs": 3, 00:08:38.214 "num_base_bdevs_discovered": 1, 00:08:38.214 "num_base_bdevs_operational": 3, 00:08:38.214 "base_bdevs_list": [ 00:08:38.214 { 00:08:38.214 "name": "BaseBdev1", 00:08:38.214 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:38.214 "is_configured": true, 00:08:38.214 "data_offset": 2048, 00:08:38.214 "data_size": 63488 00:08:38.214 }, 00:08:38.214 { 00:08:38.214 "name": null, 00:08:38.214 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:38.214 "is_configured": false, 00:08:38.214 "data_offset": 0, 00:08:38.214 "data_size": 63488 00:08:38.214 }, 00:08:38.214 { 00:08:38.214 "name": null, 00:08:38.214 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:38.214 "is_configured": false, 00:08:38.214 "data_offset": 0, 00:08:38.214 "data_size": 63488 00:08:38.214 } 00:08:38.214 ] 00:08:38.214 }' 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.214 11:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.473 [2024-12-06 11:51:36.208834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.473 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.732 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.732 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.732 "name": "Existed_Raid", 00:08:38.732 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:38.732 "strip_size_kb": 0, 00:08:38.732 "state": "configuring", 00:08:38.732 "raid_level": "raid1", 00:08:38.732 "superblock": true, 00:08:38.732 "num_base_bdevs": 3, 00:08:38.732 "num_base_bdevs_discovered": 2, 00:08:38.732 "num_base_bdevs_operational": 3, 00:08:38.732 "base_bdevs_list": [ 00:08:38.732 { 00:08:38.732 "name": "BaseBdev1", 00:08:38.732 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:38.732 "is_configured": true, 00:08:38.732 "data_offset": 2048, 00:08:38.732 "data_size": 63488 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "name": null, 00:08:38.732 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:38.732 "is_configured": false, 00:08:38.732 "data_offset": 0, 00:08:38.732 "data_size": 63488 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "name": "BaseBdev3", 00:08:38.732 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:38.732 "is_configured": true, 00:08:38.732 "data_offset": 2048, 00:08:38.732 "data_size": 63488 00:08:38.732 } 00:08:38.732 ] 00:08:38.732 }' 00:08:38.732 11:51:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.732 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 [2024-12-06 11:51:36.660082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.992 "name": "Existed_Raid", 00:08:38.992 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:38.992 "strip_size_kb": 0, 00:08:38.992 "state": "configuring", 00:08:38.992 "raid_level": "raid1", 00:08:38.992 "superblock": true, 00:08:38.992 "num_base_bdevs": 3, 00:08:38.992 "num_base_bdevs_discovered": 1, 00:08:38.992 "num_base_bdevs_operational": 3, 00:08:38.992 "base_bdevs_list": [ 00:08:38.992 { 00:08:38.992 "name": null, 00:08:38.992 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:38.992 "is_configured": false, 00:08:38.992 "data_offset": 0, 00:08:38.992 "data_size": 63488 00:08:38.992 }, 00:08:38.992 { 00:08:38.992 "name": null, 00:08:38.992 "uuid": 
"467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:38.992 "is_configured": false, 00:08:38.992 "data_offset": 0, 00:08:38.992 "data_size": 63488 00:08:38.992 }, 00:08:38.992 { 00:08:38.992 "name": "BaseBdev3", 00:08:38.992 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:38.992 "is_configured": true, 00:08:38.992 "data_offset": 2048, 00:08:38.992 "data_size": 63488 00:08:38.992 } 00:08:38.992 ] 00:08:38.992 }' 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.992 11:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.563 [2024-12-06 11:51:37.129787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.563 "name": "Existed_Raid", 00:08:39.563 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:39.563 "strip_size_kb": 0, 00:08:39.563 "state": "configuring", 00:08:39.563 
"raid_level": "raid1", 00:08:39.563 "superblock": true, 00:08:39.563 "num_base_bdevs": 3, 00:08:39.563 "num_base_bdevs_discovered": 2, 00:08:39.563 "num_base_bdevs_operational": 3, 00:08:39.563 "base_bdevs_list": [ 00:08:39.563 { 00:08:39.563 "name": null, 00:08:39.563 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:39.563 "is_configured": false, 00:08:39.563 "data_offset": 0, 00:08:39.563 "data_size": 63488 00:08:39.563 }, 00:08:39.563 { 00:08:39.563 "name": "BaseBdev2", 00:08:39.563 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:39.563 "is_configured": true, 00:08:39.563 "data_offset": 2048, 00:08:39.563 "data_size": 63488 00:08:39.563 }, 00:08:39.563 { 00:08:39.563 "name": "BaseBdev3", 00:08:39.563 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:39.563 "is_configured": true, 00:08:39.563 "data_offset": 2048, 00:08:39.563 "data_size": 63488 00:08:39.563 } 00:08:39.563 ] 00:08:39.563 }' 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.563 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:39.823 11:51:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c037f16-4d6d-46c6-bb20-1d0828f000a4 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.083 [2024-12-06 11:51:37.615731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.083 [2024-12-06 11:51:37.615973] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:40.083 [2024-12-06 11:51:37.616026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.083 [2024-12-06 11:51:37.616307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:40.083 [2024-12-06 11:51:37.616453] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:40.083 NewBaseBdev 00:08:40.083 [2024-12-06 11:51:37.616510] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:40.083 [2024-12-06 11:51:37.616643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.083 
11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.083 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.084 [ 00:08:40.084 { 00:08:40.084 "name": "NewBaseBdev", 00:08:40.084 "aliases": [ 00:08:40.084 "4c037f16-4d6d-46c6-bb20-1d0828f000a4" 00:08:40.084 ], 00:08:40.084 "product_name": "Malloc disk", 00:08:40.084 "block_size": 512, 00:08:40.084 "num_blocks": 65536, 00:08:40.084 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:40.084 "assigned_rate_limits": { 00:08:40.084 "rw_ios_per_sec": 0, 00:08:40.084 "rw_mbytes_per_sec": 0, 00:08:40.084 "r_mbytes_per_sec": 0, 00:08:40.084 "w_mbytes_per_sec": 0 00:08:40.084 }, 00:08:40.084 "claimed": true, 00:08:40.084 "claim_type": "exclusive_write", 00:08:40.084 
"zoned": false, 00:08:40.084 "supported_io_types": { 00:08:40.084 "read": true, 00:08:40.084 "write": true, 00:08:40.084 "unmap": true, 00:08:40.084 "flush": true, 00:08:40.084 "reset": true, 00:08:40.084 "nvme_admin": false, 00:08:40.084 "nvme_io": false, 00:08:40.084 "nvme_io_md": false, 00:08:40.084 "write_zeroes": true, 00:08:40.084 "zcopy": true, 00:08:40.084 "get_zone_info": false, 00:08:40.084 "zone_management": false, 00:08:40.084 "zone_append": false, 00:08:40.084 "compare": false, 00:08:40.084 "compare_and_write": false, 00:08:40.084 "abort": true, 00:08:40.084 "seek_hole": false, 00:08:40.084 "seek_data": false, 00:08:40.084 "copy": true, 00:08:40.084 "nvme_iov_md": false 00:08:40.084 }, 00:08:40.084 "memory_domains": [ 00:08:40.084 { 00:08:40.084 "dma_device_id": "system", 00:08:40.084 "dma_device_type": 1 00:08:40.084 }, 00:08:40.084 { 00:08:40.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.084 "dma_device_type": 2 00:08:40.084 } 00:08:40.084 ], 00:08:40.084 "driver_specific": {} 00:08:40.084 } 00:08:40.084 ] 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.084 "name": "Existed_Raid", 00:08:40.084 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:40.084 "strip_size_kb": 0, 00:08:40.084 "state": "online", 00:08:40.084 "raid_level": "raid1", 00:08:40.084 "superblock": true, 00:08:40.084 "num_base_bdevs": 3, 00:08:40.084 "num_base_bdevs_discovered": 3, 00:08:40.084 "num_base_bdevs_operational": 3, 00:08:40.084 "base_bdevs_list": [ 00:08:40.084 { 00:08:40.084 "name": "NewBaseBdev", 00:08:40.084 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:40.084 "is_configured": true, 00:08:40.084 "data_offset": 2048, 00:08:40.084 "data_size": 63488 00:08:40.084 }, 00:08:40.084 { 00:08:40.084 "name": "BaseBdev2", 00:08:40.084 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:40.084 "is_configured": true, 00:08:40.084 "data_offset": 2048, 00:08:40.084 "data_size": 63488 00:08:40.084 }, 00:08:40.084 
{ 00:08:40.084 "name": "BaseBdev3", 00:08:40.084 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:40.084 "is_configured": true, 00:08:40.084 "data_offset": 2048, 00:08:40.084 "data_size": 63488 00:08:40.084 } 00:08:40.084 ] 00:08:40.084 }' 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.084 11:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.654 [2024-12-06 11:51:38.131588] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.654 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.654 "name": "Existed_Raid", 00:08:40.654 
"aliases": [ 00:08:40.654 "d49e877a-a425-46b2-954e-0322236c2ccd" 00:08:40.654 ], 00:08:40.654 "product_name": "Raid Volume", 00:08:40.654 "block_size": 512, 00:08:40.654 "num_blocks": 63488, 00:08:40.654 "uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:40.654 "assigned_rate_limits": { 00:08:40.654 "rw_ios_per_sec": 0, 00:08:40.654 "rw_mbytes_per_sec": 0, 00:08:40.654 "r_mbytes_per_sec": 0, 00:08:40.654 "w_mbytes_per_sec": 0 00:08:40.654 }, 00:08:40.654 "claimed": false, 00:08:40.654 "zoned": false, 00:08:40.654 "supported_io_types": { 00:08:40.654 "read": true, 00:08:40.654 "write": true, 00:08:40.654 "unmap": false, 00:08:40.654 "flush": false, 00:08:40.654 "reset": true, 00:08:40.654 "nvme_admin": false, 00:08:40.654 "nvme_io": false, 00:08:40.654 "nvme_io_md": false, 00:08:40.654 "write_zeroes": true, 00:08:40.654 "zcopy": false, 00:08:40.654 "get_zone_info": false, 00:08:40.654 "zone_management": false, 00:08:40.654 "zone_append": false, 00:08:40.654 "compare": false, 00:08:40.654 "compare_and_write": false, 00:08:40.654 "abort": false, 00:08:40.654 "seek_hole": false, 00:08:40.654 "seek_data": false, 00:08:40.654 "copy": false, 00:08:40.654 "nvme_iov_md": false 00:08:40.654 }, 00:08:40.654 "memory_domains": [ 00:08:40.654 { 00:08:40.654 "dma_device_id": "system", 00:08:40.654 "dma_device_type": 1 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.654 "dma_device_type": 2 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "dma_device_id": "system", 00:08:40.654 "dma_device_type": 1 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.654 "dma_device_type": 2 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "dma_device_id": "system", 00:08:40.654 "dma_device_type": 1 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.654 "dma_device_type": 2 00:08:40.654 } 00:08:40.654 ], 00:08:40.654 "driver_specific": { 00:08:40.654 "raid": { 00:08:40.654 
"uuid": "d49e877a-a425-46b2-954e-0322236c2ccd", 00:08:40.654 "strip_size_kb": 0, 00:08:40.654 "state": "online", 00:08:40.654 "raid_level": "raid1", 00:08:40.654 "superblock": true, 00:08:40.654 "num_base_bdevs": 3, 00:08:40.654 "num_base_bdevs_discovered": 3, 00:08:40.654 "num_base_bdevs_operational": 3, 00:08:40.654 "base_bdevs_list": [ 00:08:40.654 { 00:08:40.654 "name": "NewBaseBdev", 00:08:40.654 "uuid": "4c037f16-4d6d-46c6-bb20-1d0828f000a4", 00:08:40.654 "is_configured": true, 00:08:40.654 "data_offset": 2048, 00:08:40.654 "data_size": 63488 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "name": "BaseBdev2", 00:08:40.654 "uuid": "467045f7-507b-429f-b9f9-ff39f80d5e5f", 00:08:40.654 "is_configured": true, 00:08:40.654 "data_offset": 2048, 00:08:40.654 "data_size": 63488 00:08:40.654 }, 00:08:40.655 { 00:08:40.655 "name": "BaseBdev3", 00:08:40.655 "uuid": "f119252e-a91e-4820-b3bd-8de71cce2125", 00:08:40.655 "is_configured": true, 00:08:40.655 "data_offset": 2048, 00:08:40.655 "data_size": 63488 00:08:40.655 } 00:08:40.655 ] 00:08:40.655 } 00:08:40.655 } 00:08:40.655 }' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.655 BaseBdev2 00:08:40.655 BaseBdev3' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.655 11:51:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.655 11:51:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.655 [2024-12-06 11:51:38.391320] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.655 [2024-12-06 11:51:38.391344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.655 [2024-12-06 11:51:38.391403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.655 [2024-12-06 11:51:38.391641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.655 [2024-12-06 11:51:38.391650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78737 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 78737 ']' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78737 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.655 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78737 00:08:40.915 killing process with pid 78737 00:08:40.915 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.915 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.915 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78737' 00:08:40.915 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78737 00:08:40.915 [2024-12-06 11:51:38.439189] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.915 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78737 00:08:40.915 [2024-12-06 11:51:38.470600] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.174 11:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.174 00:08:41.174 real 0m8.774s 00:08:41.174 user 0m14.995s 00:08:41.174 sys 0m1.769s 00:08:41.174 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.174 11:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.174 ************************************ 00:08:41.174 END TEST raid_state_function_test_sb 00:08:41.174 ************************************ 00:08:41.174 11:51:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:08:41.174 11:51:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:41.174 11:51:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.174 11:51:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.174 ************************************ 00:08:41.174 START TEST raid_superblock_test 00:08:41.174 ************************************ 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79341 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79341 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79341 ']' 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.174 11:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.174 [2024-12-06 11:51:38.858809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:41.174 [2024-12-06 11:51:38.859016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79341 ] 00:08:41.433 [2024-12-06 11:51:39.002520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.433 [2024-12-06 11:51:39.045851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.433 [2024-12-06 11:51:39.086975] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.433 [2024-12-06 11:51:39.087092] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:42.002 
11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.002 malloc1 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.002 [2024-12-06 11:51:39.708120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.002 [2024-12-06 11:51:39.708223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.002 [2024-12-06 11:51:39.708270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:42.002 [2024-12-06 11:51:39.708320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.002 [2024-12-06 11:51:39.710314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.002 [2024-12-06 11:51:39.710397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.002 pt1 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.002 malloc2 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.002 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.003 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.003 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.003 [2024-12-06 11:51:39.755048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.003 [2024-12-06 11:51:39.755273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.003 [2024-12-06 11:51:39.755360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:42.003 [2024-12-06 11:51:39.755442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.262 [2024-12-06 11:51:39.760192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.262 [2024-12-06 11:51:39.760370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.262 
pt2 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.262 malloc3 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.262 [2024-12-06 11:51:39.789827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.262 [2024-12-06 11:51:39.789926] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.262 [2024-12-06 11:51:39.789960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.262 [2024-12-06 11:51:39.789989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.262 [2024-12-06 11:51:39.792001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.262 [2024-12-06 11:51:39.792072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.262 pt3 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.262 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.262 [2024-12-06 11:51:39.801876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.262 [2024-12-06 11:51:39.803714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.262 [2024-12-06 11:51:39.803820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.262 [2024-12-06 11:51:39.804002] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:42.262 [2024-12-06 11:51:39.804048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.263 [2024-12-06 11:51:39.804325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:42.263 
[2024-12-06 11:51:39.804496] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:42.263 [2024-12-06 11:51:39.804515] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:42.263 [2024-12-06 11:51:39.804634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.263 "name": "raid_bdev1", 00:08:42.263 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:42.263 "strip_size_kb": 0, 00:08:42.263 "state": "online", 00:08:42.263 "raid_level": "raid1", 00:08:42.263 "superblock": true, 00:08:42.263 "num_base_bdevs": 3, 00:08:42.263 "num_base_bdevs_discovered": 3, 00:08:42.263 "num_base_bdevs_operational": 3, 00:08:42.263 "base_bdevs_list": [ 00:08:42.263 { 00:08:42.263 "name": "pt1", 00:08:42.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.263 "is_configured": true, 00:08:42.263 "data_offset": 2048, 00:08:42.263 "data_size": 63488 00:08:42.263 }, 00:08:42.263 { 00:08:42.263 "name": "pt2", 00:08:42.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.263 "is_configured": true, 00:08:42.263 "data_offset": 2048, 00:08:42.263 "data_size": 63488 00:08:42.263 }, 00:08:42.263 { 00:08:42.263 "name": "pt3", 00:08:42.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.263 "is_configured": true, 00:08:42.263 "data_offset": 2048, 00:08:42.263 "data_size": 63488 00:08:42.263 } 00:08:42.263 ] 00:08:42.263 }' 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.263 11:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.832 11:51:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.832 [2024-12-06 11:51:40.297231] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.832 "name": "raid_bdev1", 00:08:42.832 "aliases": [ 00:08:42.832 "2c6e92ed-979d-42c6-9adb-3350cfb8931c" 00:08:42.832 ], 00:08:42.832 "product_name": "Raid Volume", 00:08:42.832 "block_size": 512, 00:08:42.832 "num_blocks": 63488, 00:08:42.832 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:42.832 "assigned_rate_limits": { 00:08:42.832 "rw_ios_per_sec": 0, 00:08:42.832 "rw_mbytes_per_sec": 0, 00:08:42.832 "r_mbytes_per_sec": 0, 00:08:42.832 "w_mbytes_per_sec": 0 00:08:42.832 }, 00:08:42.832 "claimed": false, 00:08:42.832 "zoned": false, 00:08:42.832 "supported_io_types": { 00:08:42.832 "read": true, 00:08:42.832 "write": true, 00:08:42.832 "unmap": false, 00:08:42.832 "flush": false, 00:08:42.832 "reset": true, 00:08:42.832 "nvme_admin": false, 00:08:42.832 "nvme_io": false, 00:08:42.832 "nvme_io_md": false, 00:08:42.832 "write_zeroes": true, 00:08:42.832 "zcopy": false, 00:08:42.832 "get_zone_info": false, 00:08:42.832 "zone_management": false, 00:08:42.832 "zone_append": false, 00:08:42.832 "compare": false, 00:08:42.832 
"compare_and_write": false, 00:08:42.832 "abort": false, 00:08:42.832 "seek_hole": false, 00:08:42.832 "seek_data": false, 00:08:42.832 "copy": false, 00:08:42.832 "nvme_iov_md": false 00:08:42.832 }, 00:08:42.832 "memory_domains": [ 00:08:42.832 { 00:08:42.832 "dma_device_id": "system", 00:08:42.832 "dma_device_type": 1 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.832 "dma_device_type": 2 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "dma_device_id": "system", 00:08:42.832 "dma_device_type": 1 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.832 "dma_device_type": 2 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "dma_device_id": "system", 00:08:42.832 "dma_device_type": 1 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.832 "dma_device_type": 2 00:08:42.832 } 00:08:42.832 ], 00:08:42.832 "driver_specific": { 00:08:42.832 "raid": { 00:08:42.832 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:42.832 "strip_size_kb": 0, 00:08:42.832 "state": "online", 00:08:42.832 "raid_level": "raid1", 00:08:42.832 "superblock": true, 00:08:42.832 "num_base_bdevs": 3, 00:08:42.832 "num_base_bdevs_discovered": 3, 00:08:42.832 "num_base_bdevs_operational": 3, 00:08:42.832 "base_bdevs_list": [ 00:08:42.832 { 00:08:42.832 "name": "pt1", 00:08:42.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.832 "is_configured": true, 00:08:42.832 "data_offset": 2048, 00:08:42.832 "data_size": 63488 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "name": "pt2", 00:08:42.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.832 "is_configured": true, 00:08:42.832 "data_offset": 2048, 00:08:42.832 "data_size": 63488 00:08:42.832 }, 00:08:42.832 { 00:08:42.832 "name": "pt3", 00:08:42.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.832 "is_configured": true, 00:08:42.832 "data_offset": 2048, 00:08:42.832 "data_size": 63488 00:08:42.832 } 
00:08:42.832 ] 00:08:42.832 } 00:08:42.832 } 00:08:42.832 }' 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.832 pt2 00:08:42.832 pt3' 00:08:42.832 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.833 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.833 [2024-12-06 11:51:40.584706] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2c6e92ed-979d-42c6-9adb-3350cfb8931c 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2c6e92ed-979d-42c6-9adb-3350cfb8931c ']' 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 [2024-12-06 11:51:40.632391] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.093 [2024-12-06 11:51:40.632446] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.093 [2024-12-06 11:51:40.632556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.093 [2024-12-06 11:51:40.632651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.093 [2024-12-06 11:51:40.632698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.093 11:51:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:43.093 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.094 [2024-12-06 11:51:40.780162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.094 [2024-12-06 11:51:40.781980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.094 [2024-12-06 11:51:40.782069] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:43.094 [2024-12-06 11:51:40.782136] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.094 [2024-12-06 11:51:40.782250] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.094 [2024-12-06 11:51:40.782333] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:43.094 [2024-12-06 11:51:40.782378] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.094 [2024-12-06 11:51:40.782435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:43.094 request: 00:08:43.094 { 00:08:43.094 "name": "raid_bdev1", 00:08:43.094 "raid_level": "raid1", 00:08:43.094 "base_bdevs": [ 00:08:43.094 "malloc1", 00:08:43.094 "malloc2", 00:08:43.094 "malloc3" 00:08:43.094 ], 00:08:43.094 "superblock": false, 00:08:43.094 "method": "bdev_raid_create", 00:08:43.094 "req_id": 1 00:08:43.094 } 00:08:43.094 Got JSON-RPC error response 00:08:43.094 response: 00:08:43.094 { 00:08:43.094 "code": -17, 00:08:43.094 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.094 } 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.094 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.094 [2024-12-06 11:51:40.844018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.094 [2024-12-06 11:51:40.844101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.094 [2024-12-06 11:51:40.844132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:43.094 [2024-12-06 11:51:40.844164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.094 [2024-12-06 11:51:40.846176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.094 [2024-12-06 11:51:40.846253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.094 [2024-12-06 11:51:40.846332] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.094 [2024-12-06 11:51:40.846403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.094 pt1 00:08:43.094 
11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.354 "name": "raid_bdev1", 00:08:43.354 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:43.354 "strip_size_kb": 0, 00:08:43.354 
"state": "configuring", 00:08:43.354 "raid_level": "raid1", 00:08:43.354 "superblock": true, 00:08:43.354 "num_base_bdevs": 3, 00:08:43.354 "num_base_bdevs_discovered": 1, 00:08:43.354 "num_base_bdevs_operational": 3, 00:08:43.354 "base_bdevs_list": [ 00:08:43.354 { 00:08:43.354 "name": "pt1", 00:08:43.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.354 "is_configured": true, 00:08:43.354 "data_offset": 2048, 00:08:43.354 "data_size": 63488 00:08:43.354 }, 00:08:43.354 { 00:08:43.354 "name": null, 00:08:43.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.354 "is_configured": false, 00:08:43.354 "data_offset": 2048, 00:08:43.354 "data_size": 63488 00:08:43.354 }, 00:08:43.354 { 00:08:43.354 "name": null, 00:08:43.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.354 "is_configured": false, 00:08:43.354 "data_offset": 2048, 00:08:43.354 "data_size": 63488 00:08:43.354 } 00:08:43.354 ] 00:08:43.354 }' 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.354 11:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.614 [2024-12-06 11:51:41.295362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.614 [2024-12-06 11:51:41.295451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.614 [2024-12-06 11:51:41.295473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:43.614 
[2024-12-06 11:51:41.295484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.614 [2024-12-06 11:51:41.295812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.614 [2024-12-06 11:51:41.295837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.614 [2024-12-06 11:51:41.295893] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.614 [2024-12-06 11:51:41.295914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.614 pt2 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.614 [2024-12-06 11:51:41.303373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.614 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.614 "name": "raid_bdev1", 00:08:43.614 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:43.614 "strip_size_kb": 0, 00:08:43.614 "state": "configuring", 00:08:43.614 "raid_level": "raid1", 00:08:43.614 "superblock": true, 00:08:43.614 "num_base_bdevs": 3, 00:08:43.614 "num_base_bdevs_discovered": 1, 00:08:43.614 "num_base_bdevs_operational": 3, 00:08:43.614 "base_bdevs_list": [ 00:08:43.615 { 00:08:43.615 "name": "pt1", 00:08:43.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.615 "is_configured": true, 00:08:43.615 "data_offset": 2048, 00:08:43.615 "data_size": 63488 00:08:43.615 }, 00:08:43.615 { 00:08:43.615 "name": null, 00:08:43.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.615 "is_configured": false, 00:08:43.615 "data_offset": 0, 00:08:43.615 "data_size": 63488 00:08:43.615 }, 00:08:43.615 { 00:08:43.615 "name": null, 00:08:43.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.615 "is_configured": false, 00:08:43.615 
"data_offset": 2048, 00:08:43.615 "data_size": 63488 00:08:43.615 } 00:08:43.615 ] 00:08:43.615 }' 00:08:43.615 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.615 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.185 [2024-12-06 11:51:41.751359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.185 [2024-12-06 11:51:41.751444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.185 [2024-12-06 11:51:41.751478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:44.185 [2024-12-06 11:51:41.751504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.185 [2024-12-06 11:51:41.751890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.185 [2024-12-06 11:51:41.751947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.185 [2024-12-06 11:51:41.752042] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.185 [2024-12-06 11:51:41.752066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.185 pt2 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.185 11:51:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.185 [2024-12-06 11:51:41.759342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.185 [2024-12-06 11:51:41.759430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.185 [2024-12-06 11:51:41.759462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:44.185 [2024-12-06 11:51:41.759487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.185 [2024-12-06 11:51:41.759781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.185 [2024-12-06 11:51:41.759804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.185 [2024-12-06 11:51:41.759856] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:44.185 [2024-12-06 11:51:41.759882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.185 [2024-12-06 11:51:41.759979] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:44.185 [2024-12-06 11:51:41.759988] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.185 [2024-12-06 11:51:41.760193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:44.185 [2024-12-06 11:51:41.760312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:08:44.185 [2024-12-06 11:51:41.760328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:44.185 [2024-12-06 11:51:41.760419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.185 pt3 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.185 11:51:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.185 "name": "raid_bdev1", 00:08:44.185 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:44.185 "strip_size_kb": 0, 00:08:44.185 "state": "online", 00:08:44.185 "raid_level": "raid1", 00:08:44.185 "superblock": true, 00:08:44.185 "num_base_bdevs": 3, 00:08:44.185 "num_base_bdevs_discovered": 3, 00:08:44.185 "num_base_bdevs_operational": 3, 00:08:44.185 "base_bdevs_list": [ 00:08:44.185 { 00:08:44.185 "name": "pt1", 00:08:44.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.185 "is_configured": true, 00:08:44.185 "data_offset": 2048, 00:08:44.185 "data_size": 63488 00:08:44.185 }, 00:08:44.185 { 00:08:44.185 "name": "pt2", 00:08:44.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.185 "is_configured": true, 00:08:44.185 "data_offset": 2048, 00:08:44.185 "data_size": 63488 00:08:44.185 }, 00:08:44.185 { 00:08:44.185 "name": "pt3", 00:08:44.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.185 "is_configured": true, 00:08:44.185 "data_offset": 2048, 00:08:44.185 "data_size": 63488 00:08:44.185 } 00:08:44.185 ] 00:08:44.185 }' 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.185 11:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.445 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.445 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.445 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:44.446 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.446 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.446 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.706 [2024-12-06 11:51:42.207601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.706 "name": "raid_bdev1", 00:08:44.706 "aliases": [ 00:08:44.706 "2c6e92ed-979d-42c6-9adb-3350cfb8931c" 00:08:44.706 ], 00:08:44.706 "product_name": "Raid Volume", 00:08:44.706 "block_size": 512, 00:08:44.706 "num_blocks": 63488, 00:08:44.706 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:44.706 "assigned_rate_limits": { 00:08:44.706 "rw_ios_per_sec": 0, 00:08:44.706 "rw_mbytes_per_sec": 0, 00:08:44.706 "r_mbytes_per_sec": 0, 00:08:44.706 "w_mbytes_per_sec": 0 00:08:44.706 }, 00:08:44.706 "claimed": false, 00:08:44.706 "zoned": false, 00:08:44.706 "supported_io_types": { 00:08:44.706 "read": true, 00:08:44.706 "write": true, 00:08:44.706 "unmap": false, 00:08:44.706 "flush": false, 00:08:44.706 "reset": true, 00:08:44.706 "nvme_admin": false, 00:08:44.706 "nvme_io": false, 00:08:44.706 "nvme_io_md": false, 00:08:44.706 "write_zeroes": true, 00:08:44.706 "zcopy": false, 00:08:44.706 "get_zone_info": 
false, 00:08:44.706 "zone_management": false, 00:08:44.706 "zone_append": false, 00:08:44.706 "compare": false, 00:08:44.706 "compare_and_write": false, 00:08:44.706 "abort": false, 00:08:44.706 "seek_hole": false, 00:08:44.706 "seek_data": false, 00:08:44.706 "copy": false, 00:08:44.706 "nvme_iov_md": false 00:08:44.706 }, 00:08:44.706 "memory_domains": [ 00:08:44.706 { 00:08:44.706 "dma_device_id": "system", 00:08:44.706 "dma_device_type": 1 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.706 "dma_device_type": 2 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "dma_device_id": "system", 00:08:44.706 "dma_device_type": 1 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.706 "dma_device_type": 2 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "dma_device_id": "system", 00:08:44.706 "dma_device_type": 1 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.706 "dma_device_type": 2 00:08:44.706 } 00:08:44.706 ], 00:08:44.706 "driver_specific": { 00:08:44.706 "raid": { 00:08:44.706 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:44.706 "strip_size_kb": 0, 00:08:44.706 "state": "online", 00:08:44.706 "raid_level": "raid1", 00:08:44.706 "superblock": true, 00:08:44.706 "num_base_bdevs": 3, 00:08:44.706 "num_base_bdevs_discovered": 3, 00:08:44.706 "num_base_bdevs_operational": 3, 00:08:44.706 "base_bdevs_list": [ 00:08:44.706 { 00:08:44.706 "name": "pt1", 00:08:44.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.706 "is_configured": true, 00:08:44.706 "data_offset": 2048, 00:08:44.706 "data_size": 63488 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "name": "pt2", 00:08:44.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.706 "is_configured": true, 00:08:44.706 "data_offset": 2048, 00:08:44.706 "data_size": 63488 00:08:44.706 }, 00:08:44.706 { 00:08:44.706 "name": "pt3", 00:08:44.706 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:44.706 "is_configured": true, 00:08:44.706 "data_offset": 2048, 00:08:44.706 "data_size": 63488 00:08:44.706 } 00:08:44.706 ] 00:08:44.706 } 00:08:44.706 } 00:08:44.706 }' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.706 pt2 00:08:44.706 pt3' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.706 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.707 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.707 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.707 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.707 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.967 [2024-12-06 11:51:42.483564] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2c6e92ed-979d-42c6-9adb-3350cfb8931c '!=' 2c6e92ed-979d-42c6-9adb-3350cfb8931c ']' 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.967 [2024-12-06 11:51:42.531389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.967 11:51:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.967 "name": "raid_bdev1", 00:08:44.967 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:44.967 "strip_size_kb": 0, 00:08:44.967 "state": "online", 00:08:44.967 "raid_level": "raid1", 00:08:44.967 "superblock": true, 00:08:44.967 "num_base_bdevs": 3, 00:08:44.967 "num_base_bdevs_discovered": 2, 00:08:44.967 "num_base_bdevs_operational": 2, 00:08:44.967 "base_bdevs_list": [ 00:08:44.967 { 00:08:44.967 "name": null, 00:08:44.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.967 "is_configured": false, 00:08:44.967 "data_offset": 0, 00:08:44.967 "data_size": 63488 00:08:44.967 }, 00:08:44.967 { 00:08:44.967 "name": "pt2", 00:08:44.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.967 "is_configured": true, 00:08:44.967 "data_offset": 2048, 00:08:44.967 "data_size": 63488 00:08:44.967 }, 00:08:44.967 { 00:08:44.967 "name": "pt3", 00:08:44.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.967 "is_configured": true, 00:08:44.967 "data_offset": 2048, 00:08:44.967 "data_size": 63488 00:08:44.967 } 
00:08:44.967 ] 00:08:44.967 }' 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.967 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.227 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.227 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.227 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.227 [2024-12-06 11:51:42.979359] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.227 [2024-12-06 11:51:42.979424] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.227 [2024-12-06 11:51:42.979513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.227 [2024-12-06 11:51:42.979579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.227 [2024-12-06 11:51:42.979635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.488 11:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.488 11:51:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.488 [2024-12-06 11:51:43.055343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.488 [2024-12-06 11:51:43.055441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.488 [2024-12-06 11:51:43.055478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:45.488 [2024-12-06 11:51:43.055504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.488 [2024-12-06 11:51:43.057528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.488 [2024-12-06 11:51:43.057605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.488 [2024-12-06 11:51:43.057687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.488 [2024-12-06 11:51:43.057748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.488 pt2 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.488 11:51:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.488 "name": "raid_bdev1", 00:08:45.488 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:45.488 "strip_size_kb": 0, 00:08:45.488 "state": "configuring", 00:08:45.488 "raid_level": "raid1", 00:08:45.488 "superblock": true, 00:08:45.488 "num_base_bdevs": 3, 00:08:45.488 "num_base_bdevs_discovered": 1, 00:08:45.488 "num_base_bdevs_operational": 2, 00:08:45.488 "base_bdevs_list": [ 00:08:45.488 { 00:08:45.488 "name": null, 00:08:45.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.488 "is_configured": false, 00:08:45.488 "data_offset": 2048, 00:08:45.488 "data_size": 63488 00:08:45.488 }, 00:08:45.488 { 00:08:45.488 "name": "pt2", 00:08:45.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.488 "is_configured": true, 00:08:45.488 "data_offset": 2048, 00:08:45.488 "data_size": 63488 00:08:45.488 }, 00:08:45.488 { 00:08:45.488 "name": null, 00:08:45.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.488 "is_configured": false, 00:08:45.488 "data_offset": 2048, 00:08:45.488 "data_size": 63488 00:08:45.488 } 
00:08:45.488 ] 00:08:45.488 }' 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.488 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.748 [2024-12-06 11:51:43.499334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:45.748 [2024-12-06 11:51:43.499433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.748 [2024-12-06 11:51:43.499468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:45.748 [2024-12-06 11:51:43.499494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.748 [2024-12-06 11:51:43.499821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.748 [2024-12-06 11:51:43.499871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:45.748 [2024-12-06 11:51:43.499953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:45.748 [2024-12-06 11:51:43.500005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:45.748 [2024-12-06 11:51:43.500118] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:08:45.748 [2024-12-06 11:51:43.500152] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.748 [2024-12-06 11:51:43.500403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:45.748 [2024-12-06 11:51:43.500551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:45.748 [2024-12-06 11:51:43.500594] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:45.748 [2024-12-06 11:51:43.500723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.748 pt3 00:08:45.748 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.008 
11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.008 "name": "raid_bdev1", 00:08:46.008 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:46.008 "strip_size_kb": 0, 00:08:46.008 "state": "online", 00:08:46.008 "raid_level": "raid1", 00:08:46.008 "superblock": true, 00:08:46.008 "num_base_bdevs": 3, 00:08:46.008 "num_base_bdevs_discovered": 2, 00:08:46.008 "num_base_bdevs_operational": 2, 00:08:46.008 "base_bdevs_list": [ 00:08:46.008 { 00:08:46.008 "name": null, 00:08:46.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.008 "is_configured": false, 00:08:46.008 "data_offset": 2048, 00:08:46.008 "data_size": 63488 00:08:46.008 }, 00:08:46.008 { 00:08:46.008 "name": "pt2", 00:08:46.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.008 "is_configured": true, 00:08:46.008 "data_offset": 2048, 00:08:46.008 "data_size": 63488 00:08:46.008 }, 00:08:46.008 { 00:08:46.008 "name": "pt3", 00:08:46.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.008 "is_configured": true, 00:08:46.008 "data_offset": 2048, 00:08:46.008 "data_size": 63488 00:08:46.008 } 00:08:46.008 ] 00:08:46.008 }' 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.008 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.268 [2024-12-06 11:51:43.947360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.268 [2024-12-06 11:51:43.947419] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.268 [2024-12-06 11:51:43.947495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.268 [2024-12-06 11:51:43.947561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.268 [2024-12-06 11:51:43.947595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.268 11:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.268 [2024-12-06 11:51:44.003355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.268 [2024-12-06 11:51:44.003436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.268 [2024-12-06 11:51:44.003465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:46.268 [2024-12-06 11:51:44.003492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.268 [2024-12-06 11:51:44.005499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.268 [2024-12-06 11:51:44.005580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.268 [2024-12-06 11:51:44.005662] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:46.268 [2024-12-06 11:51:44.005739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.268 [2024-12-06 11:51:44.005856] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:46.268 [2024-12-06 11:51:44.005913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.268 [2024-12-06 11:51:44.005929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:08:46.268 [2024-12-06 11:51:44.005976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.268 pt1 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.268 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.529 11:51:44 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.529 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.529 "name": "raid_bdev1", 00:08:46.529 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:46.529 "strip_size_kb": 0, 00:08:46.529 "state": "configuring", 00:08:46.529 "raid_level": "raid1", 00:08:46.529 "superblock": true, 00:08:46.529 "num_base_bdevs": 3, 00:08:46.529 "num_base_bdevs_discovered": 1, 00:08:46.529 "num_base_bdevs_operational": 2, 00:08:46.529 "base_bdevs_list": [ 00:08:46.529 { 00:08:46.529 "name": null, 00:08:46.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.529 "is_configured": false, 00:08:46.529 "data_offset": 2048, 00:08:46.529 "data_size": 63488 00:08:46.529 }, 00:08:46.529 { 00:08:46.529 "name": "pt2", 00:08:46.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.529 "is_configured": true, 00:08:46.529 "data_offset": 2048, 00:08:46.529 "data_size": 63488 00:08:46.529 }, 00:08:46.529 { 00:08:46.529 "name": null, 00:08:46.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.529 "is_configured": false, 00:08:46.529 "data_offset": 2048, 00:08:46.529 "data_size": 63488 00:08:46.529 } 00:08:46.529 ] 00:08:46.529 }' 00:08:46.529 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.529 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.789 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.789 [2024-12-06 11:51:44.503336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.789 [2024-12-06 11:51:44.503424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.789 [2024-12-06 11:51:44.503457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:46.789 [2024-12-06 11:51:44.503484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.789 [2024-12-06 11:51:44.503825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.789 [2024-12-06 11:51:44.503883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.789 [2024-12-06 11:51:44.503970] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:46.790 [2024-12-06 11:51:44.504019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.790 [2024-12-06 11:51:44.504118] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:46.790 [2024-12-06 11:51:44.504155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.790 [2024-12-06 11:51:44.504384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:08:46.790 [2024-12-06 11:51:44.504536] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:46.790 [2024-12-06 11:51:44.504574] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:46.790 [2024-12-06 11:51:44.504704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.790 pt3 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.790 "name": "raid_bdev1", 00:08:46.790 "uuid": "2c6e92ed-979d-42c6-9adb-3350cfb8931c", 00:08:46.790 "strip_size_kb": 0, 00:08:46.790 "state": "online", 00:08:46.790 "raid_level": "raid1", 00:08:46.790 "superblock": true, 00:08:46.790 "num_base_bdevs": 3, 00:08:46.790 "num_base_bdevs_discovered": 2, 00:08:46.790 "num_base_bdevs_operational": 2, 00:08:46.790 "base_bdevs_list": [ 00:08:46.790 { 00:08:46.790 "name": null, 00:08:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.790 "is_configured": false, 00:08:46.790 "data_offset": 2048, 00:08:46.790 "data_size": 63488 00:08:46.790 }, 00:08:46.790 { 00:08:46.790 "name": "pt2", 00:08:46.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.790 "is_configured": true, 00:08:46.790 "data_offset": 2048, 00:08:46.790 "data_size": 63488 00:08:46.790 }, 00:08:46.790 { 00:08:46.790 "name": "pt3", 00:08:46.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.790 "is_configured": true, 00:08:46.790 "data_offset": 2048, 00:08:46.790 "data_size": 63488 00:08:46.790 } 00:08:46.790 ] 00:08:46.790 }' 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.790 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.360 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:47.360 11:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:47.360 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.360 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.360 11:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.360 [2024-12-06 11:51:45.031364] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2c6e92ed-979d-42c6-9adb-3350cfb8931c '!=' 2c6e92ed-979d-42c6-9adb-3350cfb8931c ']' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79341 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79341 ']' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79341 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79341 00:08:47.360 killing process with pid 79341 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79341' 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79341 00:08:47.360 [2024-12-06 11:51:45.099386] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.360 [2024-12-06 11:51:45.099455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.360 [2024-12-06 11:51:45.099507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.360 [2024-12-06 11:51:45.099515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:47.360 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79341 00:08:47.619 [2024-12-06 11:51:45.133041] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.619 ************************************ 00:08:47.619 END TEST raid_superblock_test 00:08:47.619 ************************************ 00:08:47.619 11:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.619 00:08:47.619 real 0m6.578s 00:08:47.619 user 0m11.104s 00:08:47.619 sys 0m1.324s 00:08:47.619 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.619 11:51:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.877 11:51:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:47.877 11:51:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.877 11:51:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.877 11:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.877 ************************************ 00:08:47.877 START TEST raid_read_error_test 00:08:47.877 ************************************ 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:08:47.877 11:51:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.877 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.878 11:51:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qZagrOpUDb 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79770 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79770 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79770 ']' 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.878 11:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.878 [2024-12-06 11:51:45.528029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:47.878 [2024-12-06 11:51:45.528135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79770 ] 00:08:48.137 [2024-12-06 11:51:45.671147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.137 [2024-12-06 11:51:45.714624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.137 [2024-12-06 11:51:45.755893] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.137 [2024-12-06 11:51:45.756009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 BaseBdev1_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 true 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 [2024-12-06 11:51:46.377157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.706 [2024-12-06 11:51:46.377283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.706 [2024-12-06 11:51:46.377323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:48.706 [2024-12-06 11:51:46.377361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.706 [2024-12-06 11:51:46.379462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.706 [2024-12-06 11:51:46.379531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.706 BaseBdev1 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 BaseBdev2_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 true 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 [2024-12-06 11:51:46.427372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.706 [2024-12-06 11:51:46.427423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.706 [2024-12-06 11:51:46.427443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:48.706 [2024-12-06 11:51:46.427451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.706 [2024-12-06 11:51:46.429466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.706 [2024-12-06 11:51:46.429498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.706 BaseBdev2 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 BaseBdev3_malloc 00:08:48.706 11:51:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.706 true 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.706 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 [2024-12-06 11:51:46.467743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.966 [2024-12-06 11:51:46.467840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.966 [2024-12-06 11:51:46.467874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:48.966 [2024-12-06 11:51:46.467900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.966 [2024-12-06 11:51:46.469891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.966 [2024-12-06 11:51:46.469954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:48.966 BaseBdev3 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 [2024-12-06 11:51:46.479811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.966 [2024-12-06 11:51:46.481615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.966 [2024-12-06 11:51:46.481679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.966 [2024-12-06 11:51:46.481832] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:48.966 [2024-12-06 11:51:46.481845] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.966 [2024-12-06 11:51:46.482053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:48.966 [2024-12-06 11:51:46.482191] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:48.966 [2024-12-06 11:51:46.482200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:48.966 [2024-12-06 11:51:46.482320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.966 11:51:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.966 "name": "raid_bdev1", 00:08:48.966 "uuid": "d3512d59-148f-46bf-9b8a-8b0491268313", 00:08:48.966 "strip_size_kb": 0, 00:08:48.966 "state": "online", 00:08:48.966 "raid_level": "raid1", 00:08:48.966 "superblock": true, 00:08:48.966 "num_base_bdevs": 3, 00:08:48.966 "num_base_bdevs_discovered": 3, 00:08:48.966 "num_base_bdevs_operational": 3, 00:08:48.966 "base_bdevs_list": [ 00:08:48.966 { 00:08:48.966 "name": "BaseBdev1", 00:08:48.966 "uuid": "c37332cf-b057-551b-ad52-73be3cfb0f94", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "name": "BaseBdev2", 00:08:48.966 "uuid": "e38f0925-3ee5-5cb4-b20f-ba33db7b5e4e", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 
00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "name": "BaseBdev3", 00:08:48.966 "uuid": "0c98d251-b034-5efa-9fb6-01f7a6d2e164", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 00:08:48.966 } 00:08:48.966 ] 00:08:48.966 }' 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.966 11:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.227 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.227 11:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.487 [2024-12-06 11:51:47.003262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.450 
11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.450 "name": "raid_bdev1", 00:08:50.450 "uuid": "d3512d59-148f-46bf-9b8a-8b0491268313", 00:08:50.450 "strip_size_kb": 0, 00:08:50.450 "state": "online", 00:08:50.450 "raid_level": "raid1", 00:08:50.450 "superblock": true, 00:08:50.450 "num_base_bdevs": 3, 00:08:50.450 "num_base_bdevs_discovered": 3, 00:08:50.450 "num_base_bdevs_operational": 3, 00:08:50.450 "base_bdevs_list": [ 00:08:50.450 { 00:08:50.450 "name": "BaseBdev1", 00:08:50.450 "uuid": "c37332cf-b057-551b-ad52-73be3cfb0f94", 
00:08:50.450 "is_configured": true, 00:08:50.450 "data_offset": 2048, 00:08:50.450 "data_size": 63488 00:08:50.450 }, 00:08:50.450 { 00:08:50.450 "name": "BaseBdev2", 00:08:50.450 "uuid": "e38f0925-3ee5-5cb4-b20f-ba33db7b5e4e", 00:08:50.450 "is_configured": true, 00:08:50.450 "data_offset": 2048, 00:08:50.450 "data_size": 63488 00:08:50.450 }, 00:08:50.450 { 00:08:50.450 "name": "BaseBdev3", 00:08:50.450 "uuid": "0c98d251-b034-5efa-9fb6-01f7a6d2e164", 00:08:50.450 "is_configured": true, 00:08:50.450 "data_offset": 2048, 00:08:50.450 "data_size": 63488 00:08:50.450 } 00:08:50.450 ] 00:08:50.450 }' 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.450 11:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.709 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.709 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.709 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.709 [2024-12-06 11:51:48.413665] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.709 [2024-12-06 11:51:48.413756] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.709 [2024-12-06 11:51:48.416265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.709 [2024-12-06 11:51:48.416366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.709 [2024-12-06 11:51:48.416512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.709 [2024-12-06 11:51:48.416558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:50.709 { 00:08:50.709 "results": [ 00:08:50.709 { 00:08:50.709 "job": "raid_bdev1", 
00:08:50.709 "core_mask": "0x1", 00:08:50.709 "workload": "randrw", 00:08:50.709 "percentage": 50, 00:08:50.709 "status": "finished", 00:08:50.709 "queue_depth": 1, 00:08:50.709 "io_size": 131072, 00:08:50.709 "runtime": 1.411419, 00:08:50.709 "iops": 15212.350124236673, 00:08:50.709 "mibps": 1901.543765529584, 00:08:50.709 "io_failed": 0, 00:08:50.709 "io_timeout": 0, 00:08:50.709 "avg_latency_us": 63.3052292937422, 00:08:50.709 "min_latency_us": 21.687336244541484, 00:08:50.709 "max_latency_us": 1430.9170305676855 00:08:50.709 } 00:08:50.709 ], 00:08:50.710 "core_count": 1 00:08:50.710 } 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79770 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79770 ']' 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79770 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79770 00:08:50.710 killing process with pid 79770 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79770' 00:08:50.710 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79770 00:08:50.710 [2024-12-06 11:51:48.462969] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.710 11:51:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79770 00:08:50.969 [2024-12-06 11:51:48.489147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.969 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qZagrOpUDb 00:08:50.969 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.969 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:51.229 ************************************ 00:08:51.229 END TEST raid_read_error_test 00:08:51.229 ************************************ 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:51.229 00:08:51.229 real 0m3.296s 00:08:51.229 user 0m4.168s 00:08:51.229 sys 0m0.517s 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.229 11:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.229 11:51:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:51.229 11:51:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.229 11:51:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.229 11:51:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.229 ************************************ 00:08:51.229 START TEST raid_write_error_test 00:08:51.229 ************************************ 00:08:51.229 11:51:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AbFU3izOit 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79899 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79899 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79899 ']' 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.229 11:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.229 [2024-12-06 11:51:48.903654] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:51.229 [2024-12-06 11:51:48.903762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79899 ] 00:08:51.489 [2024-12-06 11:51:49.047981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.489 [2024-12-06 11:51:49.092798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.489 [2024-12-06 11:51:49.135096] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.489 [2024-12-06 11:51:49.135208] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.060 BaseBdev1_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.060 true 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.060 [2024-12-06 11:51:49.752501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.060 [2024-12-06 11:51:49.752617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.060 [2024-12-06 11:51:49.752656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:52.060 [2024-12-06 11:51:49.752684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.060 [2024-12-06 11:51:49.754722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.060 [2024-12-06 11:51:49.754786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.060 BaseBdev1 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.060 BaseBdev2_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.060 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.061 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.061 true 00:08:52.061 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.061 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.061 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.061 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.061 [2024-12-06 11:51:49.808593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.061 [2024-12-06 11:51:49.808735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.061 [2024-12-06 11:51:49.808799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:52.061 [2024-12-06 11:51:49.808853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.061 [2024-12-06 11:51:49.812043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.061 [2024-12-06 11:51:49.812122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.061 BaseBdev2 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.323 11:51:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.323 BaseBdev3_malloc 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.323 true 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.323 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.323 [2024-12-06 11:51:49.849123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:52.323 [2024-12-06 11:51:49.849217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.323 [2024-12-06 11:51:49.849264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:52.323 [2024-12-06 11:51:49.849295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.323 [2024-12-06 11:51:49.851299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.324 [2024-12-06 11:51:49.851376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:52.324 BaseBdev3 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.324 [2024-12-06 11:51:49.861186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.324 [2024-12-06 11:51:49.863032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.324 [2024-12-06 11:51:49.863154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.324 [2024-12-06 11:51:49.863356] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:52.324 [2024-12-06 11:51:49.863402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.324 [2024-12-06 11:51:49.863665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:52.324 [2024-12-06 11:51:49.863833] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:52.324 [2024-12-06 11:51:49.863874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:52.324 [2024-12-06 11:51:49.864048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.324 "name": "raid_bdev1", 00:08:52.324 "uuid": "c34921bf-67af-4d25-9988-c6fe5886fc9f", 00:08:52.324 "strip_size_kb": 0, 00:08:52.324 "state": "online", 00:08:52.324 "raid_level": "raid1", 00:08:52.324 "superblock": true, 00:08:52.324 "num_base_bdevs": 3, 00:08:52.324 "num_base_bdevs_discovered": 3, 00:08:52.324 "num_base_bdevs_operational": 3, 00:08:52.324 "base_bdevs_list": [ 00:08:52.324 { 00:08:52.324 "name": "BaseBdev1", 00:08:52.324 
"uuid": "745b3e29-e89e-564b-b88c-b4b2fd29b650", 00:08:52.324 "is_configured": true, 00:08:52.324 "data_offset": 2048, 00:08:52.324 "data_size": 63488 00:08:52.324 }, 00:08:52.324 { 00:08:52.324 "name": "BaseBdev2", 00:08:52.324 "uuid": "01f2f1ef-87a7-5634-b145-f33105c5bf58", 00:08:52.324 "is_configured": true, 00:08:52.324 "data_offset": 2048, 00:08:52.324 "data_size": 63488 00:08:52.324 }, 00:08:52.324 { 00:08:52.324 "name": "BaseBdev3", 00:08:52.324 "uuid": "0ab8f6bd-022d-5372-925e-a1c734ab6946", 00:08:52.324 "is_configured": true, 00:08:52.324 "data_offset": 2048, 00:08:52.324 "data_size": 63488 00:08:52.324 } 00:08:52.324 ] 00:08:52.324 }' 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.324 11:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.583 11:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.583 11:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.842 [2024-12-06 11:51:50.416643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:53.777 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:53.777 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.777 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.778 [2024-12-06 11:51:51.328416] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:53.778 [2024-12-06 11:51:51.328541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.778 [2024-12-06 11:51:51.328794] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 
00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.778 "name": "raid_bdev1", 00:08:53.778 "uuid": "c34921bf-67af-4d25-9988-c6fe5886fc9f", 00:08:53.778 "strip_size_kb": 0, 00:08:53.778 "state": "online", 00:08:53.778 "raid_level": "raid1", 00:08:53.778 "superblock": true, 00:08:53.778 "num_base_bdevs": 3, 00:08:53.778 "num_base_bdevs_discovered": 2, 00:08:53.778 "num_base_bdevs_operational": 2, 00:08:53.778 "base_bdevs_list": [ 00:08:53.778 { 00:08:53.778 "name": null, 00:08:53.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.778 "is_configured": false, 00:08:53.778 "data_offset": 0, 00:08:53.778 "data_size": 63488 00:08:53.778 }, 00:08:53.778 { 00:08:53.778 "name": "BaseBdev2", 00:08:53.778 "uuid": "01f2f1ef-87a7-5634-b145-f33105c5bf58", 00:08:53.778 "is_configured": true, 00:08:53.778 "data_offset": 2048, 00:08:53.778 "data_size": 63488 00:08:53.778 }, 00:08:53.778 { 00:08:53.778 "name": "BaseBdev3", 00:08:53.778 "uuid": "0ab8f6bd-022d-5372-925e-a1c734ab6946", 00:08:53.778 "is_configured": true, 00:08:53.778 "data_offset": 2048, 00:08:53.778 "data_size": 63488 00:08:53.778 } 00:08:53.778 ] 00:08:53.778 }' 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.778 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.037 [2024-12-06 11:51:51.754369] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.037 [2024-12-06 11:51:51.754446] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.037 [2024-12-06 11:51:51.756984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.037 [2024-12-06 11:51:51.757067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.037 [2024-12-06 11:51:51.757163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.037 [2024-12-06 11:51:51.757227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:54.037 { 00:08:54.037 "results": [ 00:08:54.037 { 00:08:54.037 "job": "raid_bdev1", 00:08:54.037 "core_mask": "0x1", 00:08:54.037 "workload": "randrw", 00:08:54.037 "percentage": 50, 00:08:54.037 "status": "finished", 00:08:54.037 "queue_depth": 1, 00:08:54.037 "io_size": 131072, 00:08:54.037 "runtime": 1.338562, 00:08:54.037 "iops": 16859.88396503113, 00:08:54.037 "mibps": 2107.4854956288914, 00:08:54.037 "io_failed": 0, 00:08:54.037 "io_timeout": 0, 00:08:54.037 "avg_latency_us": 56.810455581888185, 00:08:54.037 "min_latency_us": 21.463755458515283, 00:08:54.037 "max_latency_us": 1373.6803493449781 00:08:54.037 } 00:08:54.037 ], 00:08:54.037 "core_count": 1 00:08:54.037 } 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79899 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79899 ']' 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79899 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:54.037 11:51:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.037 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79899 00:08:54.296 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.296 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.296 killing process with pid 79899 00:08:54.296 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79899' 00:08:54.296 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79899 00:08:54.296 [2024-12-06 11:51:51.796792] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.296 11:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79899 00:08:54.296 [2024-12-06 11:51:51.822366] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AbFU3izOit 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:54.556 ************************************ 00:08:54.556 END TEST raid_write_error_test 00:08:54.556 ************************************ 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:08:54.556 00:08:54.556 real 0m3.261s 00:08:54.556 user 0m4.131s 00:08:54.556 sys 0m0.493s 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.556 11:51:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.556 11:51:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:54.556 11:51:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:54.556 11:51:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:54.556 11:51:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:54.556 11:51:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.556 11:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.556 ************************************ 00:08:54.556 START TEST raid_state_function_test 00:08:54.556 ************************************ 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.556 
11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.556 11:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80032 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80032' 00:08:54.556 Process raid pid: 80032 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80032 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80032 ']' 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.556 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.556 [2024-12-06 11:51:52.229622] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:54.556 [2024-12-06 11:51:52.229825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.815 [2024-12-06 11:51:52.374902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.815 [2024-12-06 11:51:52.418193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.815 [2024-12-06 11:51:52.459660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.815 [2024-12-06 11:51:52.459692] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.384 [2024-12-06 11:51:53.052515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.384 [2024-12-06 11:51:53.052605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.384 [2024-12-06 11:51:53.052649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.384 [2024-12-06 11:51:53.052673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.384 [2024-12-06 11:51:53.052690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:55.384 [2024-12-06 11:51:53.052712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.384 [2024-12-06 11:51:53.052729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:55.384 [2024-12-06 11:51:53.052748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.384 "name": "Existed_Raid", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "strip_size_kb": 64, 00:08:55.384 "state": "configuring", 00:08:55.384 "raid_level": "raid0", 00:08:55.384 "superblock": false, 00:08:55.384 "num_base_bdevs": 4, 00:08:55.384 "num_base_bdevs_discovered": 0, 00:08:55.384 "num_base_bdevs_operational": 4, 00:08:55.384 "base_bdevs_list": [ 00:08:55.384 { 00:08:55.384 "name": "BaseBdev1", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 }, 00:08:55.384 { 00:08:55.384 "name": "BaseBdev2", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 }, 00:08:55.384 { 00:08:55.384 "name": "BaseBdev3", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 }, 00:08:55.384 { 00:08:55.384 "name": "BaseBdev4", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 } 00:08:55.384 ] 00:08:55.384 }' 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.384 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 [2024-12-06 11:51:53.507633] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.954 [2024-12-06 11:51:53.507709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 [2024-12-06 11:51:53.515633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.954 [2024-12-06 11:51:53.515671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.954 [2024-12-06 11:51:53.515679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.954 [2024-12-06 11:51:53.515687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.954 [2024-12-06 11:51:53.515693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.954 [2024-12-06 11:51:53.515701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.954 [2024-12-06 11:51:53.515707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:55.954 [2024-12-06 11:51:53.515715] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 [2024-12-06 11:51:53.532441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.954 BaseBdev1 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 [ 00:08:55.954 { 00:08:55.954 "name": "BaseBdev1", 00:08:55.954 "aliases": [ 00:08:55.954 "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a" 00:08:55.954 ], 00:08:55.954 "product_name": "Malloc disk", 00:08:55.954 "block_size": 512, 00:08:55.954 "num_blocks": 65536, 00:08:55.954 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:55.954 "assigned_rate_limits": { 00:08:55.954 "rw_ios_per_sec": 0, 00:08:55.954 "rw_mbytes_per_sec": 0, 00:08:55.954 "r_mbytes_per_sec": 0, 00:08:55.954 "w_mbytes_per_sec": 0 00:08:55.954 }, 00:08:55.954 "claimed": true, 00:08:55.954 "claim_type": "exclusive_write", 00:08:55.954 "zoned": false, 00:08:55.954 "supported_io_types": { 00:08:55.954 "read": true, 00:08:55.954 "write": true, 00:08:55.954 "unmap": true, 00:08:55.954 "flush": true, 00:08:55.954 "reset": true, 00:08:55.954 "nvme_admin": false, 00:08:55.954 "nvme_io": false, 00:08:55.954 "nvme_io_md": false, 00:08:55.954 "write_zeroes": true, 00:08:55.954 "zcopy": true, 00:08:55.954 "get_zone_info": false, 00:08:55.954 "zone_management": false, 00:08:55.954 "zone_append": false, 00:08:55.954 "compare": false, 00:08:55.954 "compare_and_write": false, 00:08:55.954 "abort": true, 00:08:55.954 "seek_hole": false, 00:08:55.954 "seek_data": false, 00:08:55.954 "copy": true, 00:08:55.954 "nvme_iov_md": false 00:08:55.954 }, 00:08:55.954 "memory_domains": [ 00:08:55.954 { 00:08:55.954 "dma_device_id": "system", 00:08:55.954 "dma_device_type": 1 00:08:55.954 }, 00:08:55.954 { 00:08:55.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.954 "dma_device_type": 2 00:08:55.954 } 00:08:55.954 ], 00:08:55.954 "driver_specific": {} 00:08:55.954 } 00:08:55.954 ] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.954 "name": "Existed_Raid", 
00:08:55.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.954 "strip_size_kb": 64, 00:08:55.954 "state": "configuring", 00:08:55.954 "raid_level": "raid0", 00:08:55.954 "superblock": false, 00:08:55.954 "num_base_bdevs": 4, 00:08:55.954 "num_base_bdevs_discovered": 1, 00:08:55.954 "num_base_bdevs_operational": 4, 00:08:55.954 "base_bdevs_list": [ 00:08:55.954 { 00:08:55.954 "name": "BaseBdev1", 00:08:55.954 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:55.954 "is_configured": true, 00:08:55.954 "data_offset": 0, 00:08:55.954 "data_size": 65536 00:08:55.954 }, 00:08:55.954 { 00:08:55.954 "name": "BaseBdev2", 00:08:55.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.954 "is_configured": false, 00:08:55.954 "data_offset": 0, 00:08:55.954 "data_size": 0 00:08:55.954 }, 00:08:55.954 { 00:08:55.954 "name": "BaseBdev3", 00:08:55.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.954 "is_configured": false, 00:08:55.954 "data_offset": 0, 00:08:55.954 "data_size": 0 00:08:55.954 }, 00:08:55.954 { 00:08:55.954 "name": "BaseBdev4", 00:08:55.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.954 "is_configured": false, 00:08:55.954 "data_offset": 0, 00:08:55.954 "data_size": 0 00:08:55.954 } 00:08:55.954 ] 00:08:55.954 }' 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.954 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.524 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.524 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 [2024-12-06 11:51:53.995683] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.524 [2024-12-06 11:51:53.995732] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:56.524 11:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.524 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 [2024-12-06 11:51:54.007713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.524 [2024-12-06 11:51:54.009525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.524 [2024-12-06 11:51:54.009609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.524 [2024-12-06 11:51:54.009636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.524 [2024-12-06 11:51:54.009648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.524 [2024-12-06 11:51:54.009654] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:56.524 [2024-12-06 11:51:54.009662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.524 "name": "Existed_Raid", 00:08:56.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.524 "strip_size_kb": 64, 00:08:56.524 "state": "configuring", 00:08:56.524 "raid_level": "raid0", 00:08:56.524 "superblock": false, 00:08:56.524 "num_base_bdevs": 4, 00:08:56.524 
"num_base_bdevs_discovered": 1, 00:08:56.524 "num_base_bdevs_operational": 4, 00:08:56.524 "base_bdevs_list": [ 00:08:56.524 { 00:08:56.524 "name": "BaseBdev1", 00:08:56.524 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:56.524 "is_configured": true, 00:08:56.524 "data_offset": 0, 00:08:56.524 "data_size": 65536 00:08:56.524 }, 00:08:56.524 { 00:08:56.524 "name": "BaseBdev2", 00:08:56.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.524 "is_configured": false, 00:08:56.524 "data_offset": 0, 00:08:56.524 "data_size": 0 00:08:56.524 }, 00:08:56.524 { 00:08:56.524 "name": "BaseBdev3", 00:08:56.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.524 "is_configured": false, 00:08:56.524 "data_offset": 0, 00:08:56.524 "data_size": 0 00:08:56.524 }, 00:08:56.524 { 00:08:56.524 "name": "BaseBdev4", 00:08:56.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.524 "is_configured": false, 00:08:56.524 "data_offset": 0, 00:08:56.524 "data_size": 0 00:08:56.524 } 00:08:56.524 ] 00:08:56.524 }' 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.524 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.784 [2024-12-06 11:51:54.458266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.784 BaseBdev2 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.784 11:51:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.784 [ 00:08:56.784 { 00:08:56.784 "name": "BaseBdev2", 00:08:56.784 "aliases": [ 00:08:56.784 "d1f29516-4708-4234-9676-9930cdc4576e" 00:08:56.784 ], 00:08:56.784 "product_name": "Malloc disk", 00:08:56.784 "block_size": 512, 00:08:56.784 "num_blocks": 65536, 00:08:56.784 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:56.784 "assigned_rate_limits": { 00:08:56.784 "rw_ios_per_sec": 0, 00:08:56.784 "rw_mbytes_per_sec": 0, 00:08:56.784 "r_mbytes_per_sec": 0, 00:08:56.784 "w_mbytes_per_sec": 0 00:08:56.784 }, 00:08:56.784 "claimed": true, 00:08:56.784 "claim_type": "exclusive_write", 00:08:56.784 "zoned": false, 00:08:56.784 "supported_io_types": { 
00:08:56.784 "read": true, 00:08:56.784 "write": true, 00:08:56.784 "unmap": true, 00:08:56.784 "flush": true, 00:08:56.784 "reset": true, 00:08:56.784 "nvme_admin": false, 00:08:56.784 "nvme_io": false, 00:08:56.784 "nvme_io_md": false, 00:08:56.784 "write_zeroes": true, 00:08:56.784 "zcopy": true, 00:08:56.784 "get_zone_info": false, 00:08:56.784 "zone_management": false, 00:08:56.784 "zone_append": false, 00:08:56.784 "compare": false, 00:08:56.784 "compare_and_write": false, 00:08:56.784 "abort": true, 00:08:56.784 "seek_hole": false, 00:08:56.784 "seek_data": false, 00:08:56.784 "copy": true, 00:08:56.784 "nvme_iov_md": false 00:08:56.784 }, 00:08:56.784 "memory_domains": [ 00:08:56.784 { 00:08:56.784 "dma_device_id": "system", 00:08:56.784 "dma_device_type": 1 00:08:56.784 }, 00:08:56.784 { 00:08:56.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.784 "dma_device_type": 2 00:08:56.784 } 00:08:56.784 ], 00:08:56.784 "driver_specific": {} 00:08:56.784 } 00:08:56.784 ] 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.784 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.785 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.785 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.785 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.785 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.045 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.045 "name": "Existed_Raid", 00:08:57.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.045 "strip_size_kb": 64, 00:08:57.045 "state": "configuring", 00:08:57.045 "raid_level": "raid0", 00:08:57.045 "superblock": false, 00:08:57.045 "num_base_bdevs": 4, 00:08:57.045 "num_base_bdevs_discovered": 2, 00:08:57.045 "num_base_bdevs_operational": 4, 00:08:57.045 "base_bdevs_list": [ 00:08:57.045 { 00:08:57.045 "name": "BaseBdev1", 00:08:57.045 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:57.045 "is_configured": true, 00:08:57.045 "data_offset": 0, 00:08:57.045 "data_size": 65536 00:08:57.045 }, 00:08:57.045 { 00:08:57.045 "name": "BaseBdev2", 00:08:57.045 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:57.045 
"is_configured": true, 00:08:57.045 "data_offset": 0, 00:08:57.045 "data_size": 65536 00:08:57.045 }, 00:08:57.045 { 00:08:57.045 "name": "BaseBdev3", 00:08:57.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.045 "is_configured": false, 00:08:57.045 "data_offset": 0, 00:08:57.045 "data_size": 0 00:08:57.045 }, 00:08:57.045 { 00:08:57.045 "name": "BaseBdev4", 00:08:57.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.045 "is_configured": false, 00:08:57.045 "data_offset": 0, 00:08:57.045 "data_size": 0 00:08:57.045 } 00:08:57.045 ] 00:08:57.045 }' 00:08:57.045 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.045 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 [2024-12-06 11:51:54.896348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.305 BaseBdev3 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 [ 00:08:57.305 { 00:08:57.305 "name": "BaseBdev3", 00:08:57.305 "aliases": [ 00:08:57.305 "498af67f-2fc3-47f4-9a91-cf0d079355d2" 00:08:57.305 ], 00:08:57.305 "product_name": "Malloc disk", 00:08:57.305 "block_size": 512, 00:08:57.305 "num_blocks": 65536, 00:08:57.305 "uuid": "498af67f-2fc3-47f4-9a91-cf0d079355d2", 00:08:57.305 "assigned_rate_limits": { 00:08:57.305 "rw_ios_per_sec": 0, 00:08:57.305 "rw_mbytes_per_sec": 0, 00:08:57.305 "r_mbytes_per_sec": 0, 00:08:57.305 "w_mbytes_per_sec": 0 00:08:57.305 }, 00:08:57.305 "claimed": true, 00:08:57.305 "claim_type": "exclusive_write", 00:08:57.305 "zoned": false, 00:08:57.305 "supported_io_types": { 00:08:57.305 "read": true, 00:08:57.305 "write": true, 00:08:57.305 "unmap": true, 00:08:57.305 "flush": true, 00:08:57.305 "reset": true, 00:08:57.305 "nvme_admin": false, 00:08:57.305 "nvme_io": false, 00:08:57.305 "nvme_io_md": false, 00:08:57.305 "write_zeroes": true, 00:08:57.305 "zcopy": true, 00:08:57.305 "get_zone_info": false, 00:08:57.305 "zone_management": false, 00:08:57.305 "zone_append": false, 00:08:57.305 "compare": false, 00:08:57.305 "compare_and_write": false, 
00:08:57.305 "abort": true, 00:08:57.305 "seek_hole": false, 00:08:57.305 "seek_data": false, 00:08:57.305 "copy": true, 00:08:57.305 "nvme_iov_md": false 00:08:57.305 }, 00:08:57.305 "memory_domains": [ 00:08:57.305 { 00:08:57.305 "dma_device_id": "system", 00:08:57.305 "dma_device_type": 1 00:08:57.305 }, 00:08:57.305 { 00:08:57.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.305 "dma_device_type": 2 00:08:57.305 } 00:08:57.305 ], 00:08:57.305 "driver_specific": {} 00:08:57.305 } 00:08:57.305 ] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.305 "name": "Existed_Raid", 00:08:57.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.305 "strip_size_kb": 64, 00:08:57.305 "state": "configuring", 00:08:57.305 "raid_level": "raid0", 00:08:57.305 "superblock": false, 00:08:57.305 "num_base_bdevs": 4, 00:08:57.305 "num_base_bdevs_discovered": 3, 00:08:57.305 "num_base_bdevs_operational": 4, 00:08:57.305 "base_bdevs_list": [ 00:08:57.305 { 00:08:57.305 "name": "BaseBdev1", 00:08:57.305 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:57.305 "is_configured": true, 00:08:57.305 "data_offset": 0, 00:08:57.305 "data_size": 65536 00:08:57.305 }, 00:08:57.305 { 00:08:57.305 "name": "BaseBdev2", 00:08:57.305 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:57.305 "is_configured": true, 00:08:57.305 "data_offset": 0, 00:08:57.305 "data_size": 65536 00:08:57.305 }, 00:08:57.305 { 00:08:57.305 "name": "BaseBdev3", 00:08:57.305 "uuid": "498af67f-2fc3-47f4-9a91-cf0d079355d2", 00:08:57.305 "is_configured": true, 00:08:57.305 "data_offset": 0, 00:08:57.305 "data_size": 65536 00:08:57.305 }, 00:08:57.305 { 00:08:57.305 "name": "BaseBdev4", 00:08:57.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.305 "is_configured": false, 
00:08:57.305 "data_offset": 0, 00:08:57.305 "data_size": 0 00:08:57.305 } 00:08:57.305 ] 00:08:57.305 }' 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.305 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:57.873 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.873 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 [2024-12-06 11:51:55.362455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:57.873 [2024-12-06 11:51:55.362549] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:57.873 [2024-12-06 11:51:55.362570] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:57.873 [2024-12-06 11:51:55.362854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:57.873 [2024-12-06 11:51:55.362982] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:57.874 [2024-12-06 11:51:55.363001] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:57.874 [2024-12-06 11:51:55.363196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.874 BaseBdev4 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.874 [ 00:08:57.874 { 00:08:57.874 "name": "BaseBdev4", 00:08:57.874 "aliases": [ 00:08:57.874 "6e730478-88b6-402e-831e-b6f96cc99369" 00:08:57.874 ], 00:08:57.874 "product_name": "Malloc disk", 00:08:57.874 "block_size": 512, 00:08:57.874 "num_blocks": 65536, 00:08:57.874 "uuid": "6e730478-88b6-402e-831e-b6f96cc99369", 00:08:57.874 "assigned_rate_limits": { 00:08:57.874 "rw_ios_per_sec": 0, 00:08:57.874 "rw_mbytes_per_sec": 0, 00:08:57.874 "r_mbytes_per_sec": 0, 00:08:57.874 "w_mbytes_per_sec": 0 00:08:57.874 }, 00:08:57.874 "claimed": true, 00:08:57.874 "claim_type": "exclusive_write", 00:08:57.874 "zoned": false, 00:08:57.874 "supported_io_types": { 00:08:57.874 "read": true, 00:08:57.874 "write": true, 00:08:57.874 "unmap": true, 00:08:57.874 "flush": true, 00:08:57.874 "reset": true, 00:08:57.874 
"nvme_admin": false, 00:08:57.874 "nvme_io": false, 00:08:57.874 "nvme_io_md": false, 00:08:57.874 "write_zeroes": true, 00:08:57.874 "zcopy": true, 00:08:57.874 "get_zone_info": false, 00:08:57.874 "zone_management": false, 00:08:57.874 "zone_append": false, 00:08:57.874 "compare": false, 00:08:57.874 "compare_and_write": false, 00:08:57.874 "abort": true, 00:08:57.874 "seek_hole": false, 00:08:57.874 "seek_data": false, 00:08:57.874 "copy": true, 00:08:57.874 "nvme_iov_md": false 00:08:57.874 }, 00:08:57.874 "memory_domains": [ 00:08:57.874 { 00:08:57.874 "dma_device_id": "system", 00:08:57.874 "dma_device_type": 1 00:08:57.874 }, 00:08:57.874 { 00:08:57.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.874 "dma_device_type": 2 00:08:57.874 } 00:08:57.874 ], 00:08:57.874 "driver_specific": {} 00:08:57.874 } 00:08:57.874 ] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.874 11:51:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.874 "name": "Existed_Raid", 00:08:57.874 "uuid": "2967c55a-b00b-46b0-96e0-b8c9c6e2e70c", 00:08:57.874 "strip_size_kb": 64, 00:08:57.874 "state": "online", 00:08:57.874 "raid_level": "raid0", 00:08:57.874 "superblock": false, 00:08:57.874 "num_base_bdevs": 4, 00:08:57.874 "num_base_bdevs_discovered": 4, 00:08:57.874 "num_base_bdevs_operational": 4, 00:08:57.874 "base_bdevs_list": [ 00:08:57.874 { 00:08:57.874 "name": "BaseBdev1", 00:08:57.874 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:57.874 "is_configured": true, 00:08:57.874 "data_offset": 0, 00:08:57.874 "data_size": 65536 00:08:57.874 }, 00:08:57.874 { 00:08:57.874 "name": "BaseBdev2", 00:08:57.874 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:57.874 "is_configured": true, 00:08:57.874 "data_offset": 0, 00:08:57.874 "data_size": 65536 00:08:57.874 }, 00:08:57.874 { 00:08:57.874 "name": "BaseBdev3", 00:08:57.874 "uuid": 
"498af67f-2fc3-47f4-9a91-cf0d079355d2", 00:08:57.874 "is_configured": true, 00:08:57.874 "data_offset": 0, 00:08:57.874 "data_size": 65536 00:08:57.874 }, 00:08:57.874 { 00:08:57.874 "name": "BaseBdev4", 00:08:57.874 "uuid": "6e730478-88b6-402e-831e-b6f96cc99369", 00:08:57.874 "is_configured": true, 00:08:57.874 "data_offset": 0, 00:08:57.874 "data_size": 65536 00:08:57.874 } 00:08:57.874 ] 00:08:57.874 }' 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.874 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.133 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.134 [2024-12-06 11:51:55.810013] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.134 11:51:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.134 "name": "Existed_Raid", 00:08:58.134 "aliases": [ 00:08:58.134 "2967c55a-b00b-46b0-96e0-b8c9c6e2e70c" 00:08:58.134 ], 00:08:58.134 "product_name": "Raid Volume", 00:08:58.134 "block_size": 512, 00:08:58.134 "num_blocks": 262144, 00:08:58.134 "uuid": "2967c55a-b00b-46b0-96e0-b8c9c6e2e70c", 00:08:58.134 "assigned_rate_limits": { 00:08:58.134 "rw_ios_per_sec": 0, 00:08:58.134 "rw_mbytes_per_sec": 0, 00:08:58.134 "r_mbytes_per_sec": 0, 00:08:58.134 "w_mbytes_per_sec": 0 00:08:58.134 }, 00:08:58.134 "claimed": false, 00:08:58.134 "zoned": false, 00:08:58.134 "supported_io_types": { 00:08:58.134 "read": true, 00:08:58.134 "write": true, 00:08:58.134 "unmap": true, 00:08:58.134 "flush": true, 00:08:58.134 "reset": true, 00:08:58.134 "nvme_admin": false, 00:08:58.134 "nvme_io": false, 00:08:58.134 "nvme_io_md": false, 00:08:58.134 "write_zeroes": true, 00:08:58.134 "zcopy": false, 00:08:58.134 "get_zone_info": false, 00:08:58.134 "zone_management": false, 00:08:58.134 "zone_append": false, 00:08:58.134 "compare": false, 00:08:58.134 "compare_and_write": false, 00:08:58.134 "abort": false, 00:08:58.134 "seek_hole": false, 00:08:58.134 "seek_data": false, 00:08:58.134 "copy": false, 00:08:58.134 "nvme_iov_md": false 00:08:58.134 }, 00:08:58.134 "memory_domains": [ 00:08:58.134 { 00:08:58.134 "dma_device_id": "system", 00:08:58.134 "dma_device_type": 1 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.134 "dma_device_type": 2 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "system", 00:08:58.134 "dma_device_type": 1 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.134 "dma_device_type": 2 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "system", 00:08:58.134 "dma_device_type": 1 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:58.134 "dma_device_type": 2 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "system", 00:08:58.134 "dma_device_type": 1 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.134 "dma_device_type": 2 00:08:58.134 } 00:08:58.134 ], 00:08:58.134 "driver_specific": { 00:08:58.134 "raid": { 00:08:58.134 "uuid": "2967c55a-b00b-46b0-96e0-b8c9c6e2e70c", 00:08:58.134 "strip_size_kb": 64, 00:08:58.134 "state": "online", 00:08:58.134 "raid_level": "raid0", 00:08:58.134 "superblock": false, 00:08:58.134 "num_base_bdevs": 4, 00:08:58.134 "num_base_bdevs_discovered": 4, 00:08:58.134 "num_base_bdevs_operational": 4, 00:08:58.134 "base_bdevs_list": [ 00:08:58.134 { 00:08:58.134 "name": "BaseBdev1", 00:08:58.134 "uuid": "1a937ee2-a2a0-4a0b-b5b2-9d8c9b89d27a", 00:08:58.134 "is_configured": true, 00:08:58.134 "data_offset": 0, 00:08:58.134 "data_size": 65536 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "name": "BaseBdev2", 00:08:58.134 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:58.134 "is_configured": true, 00:08:58.134 "data_offset": 0, 00:08:58.134 "data_size": 65536 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "name": "BaseBdev3", 00:08:58.134 "uuid": "498af67f-2fc3-47f4-9a91-cf0d079355d2", 00:08:58.134 "is_configured": true, 00:08:58.134 "data_offset": 0, 00:08:58.134 "data_size": 65536 00:08:58.134 }, 00:08:58.134 { 00:08:58.134 "name": "BaseBdev4", 00:08:58.134 "uuid": "6e730478-88b6-402e-831e-b6f96cc99369", 00:08:58.134 "is_configured": true, 00:08:58.134 "data_offset": 0, 00:08:58.134 "data_size": 65536 00:08:58.134 } 00:08:58.134 ] 00:08:58.134 } 00:08:58.134 } 00:08:58.134 }' 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.134 BaseBdev2 00:08:58.134 BaseBdev3 
00:08:58.134 BaseBdev4' 00:08:58.134 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.394 11:51:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.394 11:51:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 [2024-12-06 11:51:56.113227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.394 [2024-12-06 11:51:56.113309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.394 [2024-12-06 11:51:56.113426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.394 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.654 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.655 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.655 "name": "Existed_Raid", 00:08:58.655 "uuid": "2967c55a-b00b-46b0-96e0-b8c9c6e2e70c", 00:08:58.655 "strip_size_kb": 64, 00:08:58.655 "state": "offline", 00:08:58.655 "raid_level": "raid0", 00:08:58.655 "superblock": false, 00:08:58.655 "num_base_bdevs": 4, 00:08:58.655 "num_base_bdevs_discovered": 3, 00:08:58.655 "num_base_bdevs_operational": 3, 00:08:58.655 "base_bdevs_list": [ 00:08:58.655 { 00:08:58.655 "name": null, 00:08:58.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.655 "is_configured": false, 00:08:58.655 "data_offset": 0, 00:08:58.655 "data_size": 65536 00:08:58.655 }, 00:08:58.655 { 00:08:58.655 "name": "BaseBdev2", 00:08:58.655 "uuid": "d1f29516-4708-4234-9676-9930cdc4576e", 00:08:58.655 "is_configured": 
true, 00:08:58.655 "data_offset": 0, 00:08:58.655 "data_size": 65536 00:08:58.655 }, 00:08:58.655 { 00:08:58.655 "name": "BaseBdev3", 00:08:58.655 "uuid": "498af67f-2fc3-47f4-9a91-cf0d079355d2", 00:08:58.655 "is_configured": true, 00:08:58.655 "data_offset": 0, 00:08:58.655 "data_size": 65536 00:08:58.655 }, 00:08:58.655 { 00:08:58.655 "name": "BaseBdev4", 00:08:58.655 "uuid": "6e730478-88b6-402e-831e-b6f96cc99369", 00:08:58.655 "is_configured": true, 00:08:58.655 "data_offset": 0, 00:08:58.655 "data_size": 65536 00:08:58.655 } 00:08:58.655 ] 00:08:58.655 }' 00:08:58.655 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.655 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.914 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.915 [2024-12-06 11:51:56.651754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.915 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.175 [2024-12-06 11:51:56.722797] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.175 11:51:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.175 [2024-12-06 11:51:56.785820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:59.175 [2024-12-06 11:51:56.785900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:59.175 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 BaseBdev2 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 [ 00:08:59.176 { 00:08:59.176 "name": "BaseBdev2", 00:08:59.176 "aliases": [ 00:08:59.176 "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd" 00:08:59.176 ], 00:08:59.176 "product_name": "Malloc disk", 00:08:59.176 "block_size": 512, 00:08:59.176 "num_blocks": 65536, 00:08:59.176 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:08:59.176 "assigned_rate_limits": { 00:08:59.176 "rw_ios_per_sec": 0, 00:08:59.176 "rw_mbytes_per_sec": 0, 00:08:59.176 "r_mbytes_per_sec": 0, 00:08:59.176 "w_mbytes_per_sec": 0 00:08:59.176 }, 00:08:59.176 "claimed": false, 00:08:59.176 "zoned": false, 00:08:59.176 "supported_io_types": { 00:08:59.176 "read": true, 00:08:59.176 "write": true, 00:08:59.176 "unmap": true, 00:08:59.176 "flush": true, 00:08:59.176 "reset": true, 00:08:59.176 "nvme_admin": false, 00:08:59.176 "nvme_io": false, 00:08:59.176 "nvme_io_md": false, 00:08:59.176 "write_zeroes": true, 00:08:59.176 "zcopy": true, 00:08:59.176 "get_zone_info": false, 00:08:59.176 "zone_management": false, 00:08:59.176 "zone_append": false, 00:08:59.176 "compare": false, 00:08:59.176 "compare_and_write": false, 00:08:59.176 "abort": true, 00:08:59.176 "seek_hole": false, 00:08:59.176 
"seek_data": false, 00:08:59.176 "copy": true, 00:08:59.176 "nvme_iov_md": false 00:08:59.176 }, 00:08:59.176 "memory_domains": [ 00:08:59.176 { 00:08:59.176 "dma_device_id": "system", 00:08:59.176 "dma_device_type": 1 00:08:59.176 }, 00:08:59.176 { 00:08:59.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.176 "dma_device_type": 2 00:08:59.176 } 00:08:59.176 ], 00:08:59.176 "driver_specific": {} 00:08:59.176 } 00:08:59.176 ] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 BaseBdev3 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.176 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.437 [ 00:08:59.437 { 00:08:59.437 "name": "BaseBdev3", 00:08:59.437 "aliases": [ 00:08:59.437 "26e4e3b5-0fdf-4e60-b914-69834c6b1f50" 00:08:59.437 ], 00:08:59.437 "product_name": "Malloc disk", 00:08:59.437 "block_size": 512, 00:08:59.437 "num_blocks": 65536, 00:08:59.437 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:08:59.437 "assigned_rate_limits": { 00:08:59.437 "rw_ios_per_sec": 0, 00:08:59.437 "rw_mbytes_per_sec": 0, 00:08:59.437 "r_mbytes_per_sec": 0, 00:08:59.437 "w_mbytes_per_sec": 0 00:08:59.437 }, 00:08:59.437 "claimed": false, 00:08:59.437 "zoned": false, 00:08:59.437 "supported_io_types": { 00:08:59.437 "read": true, 00:08:59.437 "write": true, 00:08:59.437 "unmap": true, 00:08:59.437 "flush": true, 00:08:59.437 "reset": true, 00:08:59.437 "nvme_admin": false, 00:08:59.437 "nvme_io": false, 00:08:59.437 "nvme_io_md": false, 00:08:59.437 "write_zeroes": true, 00:08:59.437 "zcopy": true, 00:08:59.437 "get_zone_info": false, 00:08:59.437 "zone_management": false, 00:08:59.437 "zone_append": false, 00:08:59.437 "compare": false, 00:08:59.437 "compare_and_write": false, 00:08:59.437 "abort": true, 00:08:59.437 "seek_hole": false, 00:08:59.437 "seek_data": false, 
00:08:59.437 "copy": true, 00:08:59.437 "nvme_iov_md": false 00:08:59.437 }, 00:08:59.437 "memory_domains": [ 00:08:59.437 { 00:08:59.437 "dma_device_id": "system", 00:08:59.437 "dma_device_type": 1 00:08:59.437 }, 00:08:59.437 { 00:08:59.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.437 "dma_device_type": 2 00:08:59.437 } 00:08:59.437 ], 00:08:59.437 "driver_specific": {} 00:08:59.437 } 00:08:59.437 ] 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.437 BaseBdev4 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.437 
11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.437 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.437 [ 00:08:59.437 { 00:08:59.437 "name": "BaseBdev4", 00:08:59.437 "aliases": [ 00:08:59.437 "ebfa667c-ae6b-47e8-ab98-d263242c0fcf" 00:08:59.437 ], 00:08:59.437 "product_name": "Malloc disk", 00:08:59.437 "block_size": 512, 00:08:59.437 "num_blocks": 65536, 00:08:59.437 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:08:59.437 "assigned_rate_limits": { 00:08:59.437 "rw_ios_per_sec": 0, 00:08:59.437 "rw_mbytes_per_sec": 0, 00:08:59.437 "r_mbytes_per_sec": 0, 00:08:59.437 "w_mbytes_per_sec": 0 00:08:59.437 }, 00:08:59.437 "claimed": false, 00:08:59.437 "zoned": false, 00:08:59.437 "supported_io_types": { 00:08:59.437 "read": true, 00:08:59.437 "write": true, 00:08:59.437 "unmap": true, 00:08:59.437 "flush": true, 00:08:59.437 "reset": true, 00:08:59.437 "nvme_admin": false, 00:08:59.437 "nvme_io": false, 00:08:59.437 "nvme_io_md": false, 00:08:59.437 "write_zeroes": true, 00:08:59.437 "zcopy": true, 00:08:59.437 "get_zone_info": false, 00:08:59.437 "zone_management": false, 00:08:59.437 "zone_append": false, 00:08:59.437 "compare": false, 00:08:59.437 "compare_and_write": false, 00:08:59.437 "abort": true, 00:08:59.437 "seek_hole": false, 00:08:59.437 "seek_data": false, 00:08:59.437 
"copy": true, 00:08:59.437 "nvme_iov_md": false 00:08:59.437 }, 00:08:59.437 "memory_domains": [ 00:08:59.437 { 00:08:59.437 "dma_device_id": "system", 00:08:59.437 "dma_device_type": 1 00:08:59.437 }, 00:08:59.437 { 00:08:59.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.437 "dma_device_type": 2 00:08:59.437 } 00:08:59.437 ], 00:08:59.437 "driver_specific": {} 00:08:59.437 } 00:08:59.437 ] 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.437 [2024-12-06 11:51:57.012744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.437 [2024-12-06 11:51:57.012828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.437 [2024-12-06 11:51:57.012866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.437 [2024-12-06 11:51:57.014633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.437 [2024-12-06 11:51:57.014714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.437 11:51:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.437 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.438 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.438 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.438 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.438 "name": "Existed_Raid", 00:08:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.438 "strip_size_kb": 64, 00:08:59.438 "state": "configuring", 00:08:59.438 
"raid_level": "raid0", 00:08:59.438 "superblock": false, 00:08:59.438 "num_base_bdevs": 4, 00:08:59.438 "num_base_bdevs_discovered": 3, 00:08:59.438 "num_base_bdevs_operational": 4, 00:08:59.438 "base_bdevs_list": [ 00:08:59.438 { 00:08:59.438 "name": "BaseBdev1", 00:08:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.438 "is_configured": false, 00:08:59.438 "data_offset": 0, 00:08:59.438 "data_size": 0 00:08:59.438 }, 00:08:59.438 { 00:08:59.438 "name": "BaseBdev2", 00:08:59.438 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:08:59.438 "is_configured": true, 00:08:59.438 "data_offset": 0, 00:08:59.438 "data_size": 65536 00:08:59.438 }, 00:08:59.438 { 00:08:59.438 "name": "BaseBdev3", 00:08:59.438 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:08:59.438 "is_configured": true, 00:08:59.438 "data_offset": 0, 00:08:59.438 "data_size": 65536 00:08:59.438 }, 00:08:59.438 { 00:08:59.438 "name": "BaseBdev4", 00:08:59.438 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:08:59.438 "is_configured": true, 00:08:59.438 "data_offset": 0, 00:08:59.438 "data_size": 65536 00:08:59.438 } 00:08:59.438 ] 00:08:59.438 }' 00:08:59.438 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.438 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.008 [2024-12-06 11:51:57.463932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.008 "name": "Existed_Raid", 00:09:00.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.008 "strip_size_kb": 64, 00:09:00.008 "state": "configuring", 00:09:00.008 "raid_level": "raid0", 00:09:00.008 "superblock": false, 00:09:00.008 
"num_base_bdevs": 4, 00:09:00.008 "num_base_bdevs_discovered": 2, 00:09:00.008 "num_base_bdevs_operational": 4, 00:09:00.008 "base_bdevs_list": [ 00:09:00.008 { 00:09:00.008 "name": "BaseBdev1", 00:09:00.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.008 "is_configured": false, 00:09:00.008 "data_offset": 0, 00:09:00.008 "data_size": 0 00:09:00.008 }, 00:09:00.008 { 00:09:00.008 "name": null, 00:09:00.008 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:00.008 "is_configured": false, 00:09:00.008 "data_offset": 0, 00:09:00.008 "data_size": 65536 00:09:00.008 }, 00:09:00.008 { 00:09:00.008 "name": "BaseBdev3", 00:09:00.008 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:00.008 "is_configured": true, 00:09:00.008 "data_offset": 0, 00:09:00.008 "data_size": 65536 00:09:00.008 }, 00:09:00.008 { 00:09:00.008 "name": "BaseBdev4", 00:09:00.008 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:00.008 "is_configured": true, 00:09:00.008 "data_offset": 0, 00:09:00.008 "data_size": 65536 00:09:00.008 } 00:09:00.008 ] 00:09:00.008 }' 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.008 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.269 11:51:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.269 [2024-12-06 11:51:57.913993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.269 BaseBdev1 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.269 [ 00:09:00.269 { 00:09:00.269 "name": "BaseBdev1", 00:09:00.269 "aliases": [ 00:09:00.269 "939864bc-add8-4691-a378-4f88934ca6ae" 00:09:00.269 ], 00:09:00.269 "product_name": "Malloc disk", 00:09:00.269 "block_size": 512, 00:09:00.269 "num_blocks": 65536, 00:09:00.269 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:00.269 "assigned_rate_limits": { 00:09:00.269 "rw_ios_per_sec": 0, 00:09:00.269 "rw_mbytes_per_sec": 0, 00:09:00.269 "r_mbytes_per_sec": 0, 00:09:00.269 "w_mbytes_per_sec": 0 00:09:00.269 }, 00:09:00.269 "claimed": true, 00:09:00.269 "claim_type": "exclusive_write", 00:09:00.269 "zoned": false, 00:09:00.269 "supported_io_types": { 00:09:00.269 "read": true, 00:09:00.269 "write": true, 00:09:00.269 "unmap": true, 00:09:00.269 "flush": true, 00:09:00.269 "reset": true, 00:09:00.269 "nvme_admin": false, 00:09:00.269 "nvme_io": false, 00:09:00.269 "nvme_io_md": false, 00:09:00.269 "write_zeroes": true, 00:09:00.269 "zcopy": true, 00:09:00.269 "get_zone_info": false, 00:09:00.269 "zone_management": false, 00:09:00.269 "zone_append": false, 00:09:00.269 "compare": false, 00:09:00.269 "compare_and_write": false, 00:09:00.269 "abort": true, 00:09:00.269 "seek_hole": false, 00:09:00.269 "seek_data": false, 00:09:00.269 "copy": true, 00:09:00.269 "nvme_iov_md": false 00:09:00.269 }, 00:09:00.269 "memory_domains": [ 00:09:00.269 { 00:09:00.269 "dma_device_id": "system", 00:09:00.269 "dma_device_type": 1 00:09:00.269 }, 00:09:00.269 { 00:09:00.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.269 "dma_device_type": 2 00:09:00.269 } 00:09:00.269 ], 00:09:00.269 "driver_specific": {} 00:09:00.269 } 00:09:00.269 ] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.269 "name": "Existed_Raid", 00:09:00.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.269 "strip_size_kb": 64, 00:09:00.269 "state": "configuring", 00:09:00.269 "raid_level": "raid0", 00:09:00.269 "superblock": false, 
00:09:00.269 "num_base_bdevs": 4, 00:09:00.269 "num_base_bdevs_discovered": 3, 00:09:00.269 "num_base_bdevs_operational": 4, 00:09:00.269 "base_bdevs_list": [ 00:09:00.269 { 00:09:00.269 "name": "BaseBdev1", 00:09:00.269 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:00.269 "is_configured": true, 00:09:00.269 "data_offset": 0, 00:09:00.269 "data_size": 65536 00:09:00.269 }, 00:09:00.269 { 00:09:00.269 "name": null, 00:09:00.269 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:00.269 "is_configured": false, 00:09:00.269 "data_offset": 0, 00:09:00.269 "data_size": 65536 00:09:00.269 }, 00:09:00.269 { 00:09:00.269 "name": "BaseBdev3", 00:09:00.269 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:00.269 "is_configured": true, 00:09:00.269 "data_offset": 0, 00:09:00.269 "data_size": 65536 00:09:00.269 }, 00:09:00.269 { 00:09:00.269 "name": "BaseBdev4", 00:09:00.269 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:00.269 "is_configured": true, 00:09:00.269 "data_offset": 0, 00:09:00.269 "data_size": 65536 00:09:00.269 } 00:09:00.269 ] 00:09:00.269 }' 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.269 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:00.839 11:51:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.839 [2024-12-06 11:51:58.409194] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.839 "name": "Existed_Raid", 00:09:00.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.839 "strip_size_kb": 64, 00:09:00.839 "state": "configuring", 00:09:00.839 "raid_level": "raid0", 00:09:00.839 "superblock": false, 00:09:00.839 "num_base_bdevs": 4, 00:09:00.839 "num_base_bdevs_discovered": 2, 00:09:00.839 "num_base_bdevs_operational": 4, 00:09:00.839 "base_bdevs_list": [ 00:09:00.839 { 00:09:00.839 "name": "BaseBdev1", 00:09:00.839 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:00.839 "is_configured": true, 00:09:00.839 "data_offset": 0, 00:09:00.839 "data_size": 65536 00:09:00.839 }, 00:09:00.839 { 00:09:00.839 "name": null, 00:09:00.839 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:00.839 "is_configured": false, 00:09:00.839 "data_offset": 0, 00:09:00.839 "data_size": 65536 00:09:00.839 }, 00:09:00.839 { 00:09:00.839 "name": null, 00:09:00.839 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:00.839 "is_configured": false, 00:09:00.839 "data_offset": 0, 00:09:00.839 "data_size": 65536 00:09:00.839 }, 00:09:00.839 { 00:09:00.839 "name": "BaseBdev4", 00:09:00.839 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:00.839 "is_configured": true, 00:09:00.839 "data_offset": 0, 00:09:00.839 "data_size": 65536 00:09:00.839 } 00:09:00.839 ] 00:09:00.839 }' 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.839 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.099 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.359 [2024-12-06 11:51:58.856487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.359 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.359 "name": "Existed_Raid", 00:09:01.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.360 "strip_size_kb": 64, 00:09:01.360 "state": "configuring", 00:09:01.360 "raid_level": "raid0", 00:09:01.360 "superblock": false, 00:09:01.360 "num_base_bdevs": 4, 00:09:01.360 "num_base_bdevs_discovered": 3, 00:09:01.360 "num_base_bdevs_operational": 4, 00:09:01.360 "base_bdevs_list": [ 00:09:01.360 { 00:09:01.360 "name": "BaseBdev1", 00:09:01.360 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:01.360 "is_configured": true, 00:09:01.360 "data_offset": 0, 00:09:01.360 "data_size": 65536 00:09:01.360 }, 00:09:01.360 { 00:09:01.360 "name": null, 00:09:01.360 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:01.360 "is_configured": false, 00:09:01.360 "data_offset": 0, 00:09:01.360 "data_size": 65536 00:09:01.360 }, 00:09:01.360 { 00:09:01.360 "name": "BaseBdev3", 00:09:01.360 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:01.360 "is_configured": 
true, 00:09:01.360 "data_offset": 0, 00:09:01.360 "data_size": 65536 00:09:01.360 }, 00:09:01.360 { 00:09:01.360 "name": "BaseBdev4", 00:09:01.360 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:01.360 "is_configured": true, 00:09:01.360 "data_offset": 0, 00:09:01.360 "data_size": 65536 00:09:01.360 } 00:09:01.360 ] 00:09:01.360 }' 00:09:01.360 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.360 11:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.619 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.619 [2024-12-06 11:51:59.371589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.879 "name": "Existed_Raid", 00:09:01.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.879 "strip_size_kb": 64, 00:09:01.879 "state": "configuring", 00:09:01.879 "raid_level": "raid0", 00:09:01.879 "superblock": false, 00:09:01.879 "num_base_bdevs": 4, 00:09:01.879 "num_base_bdevs_discovered": 2, 00:09:01.879 "num_base_bdevs_operational": 4, 00:09:01.879 
"base_bdevs_list": [ 00:09:01.879 { 00:09:01.879 "name": null, 00:09:01.879 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:01.879 "is_configured": false, 00:09:01.879 "data_offset": 0, 00:09:01.879 "data_size": 65536 00:09:01.879 }, 00:09:01.879 { 00:09:01.879 "name": null, 00:09:01.879 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:01.879 "is_configured": false, 00:09:01.879 "data_offset": 0, 00:09:01.879 "data_size": 65536 00:09:01.879 }, 00:09:01.879 { 00:09:01.879 "name": "BaseBdev3", 00:09:01.879 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:01.879 "is_configured": true, 00:09:01.879 "data_offset": 0, 00:09:01.879 "data_size": 65536 00:09:01.879 }, 00:09:01.879 { 00:09:01.879 "name": "BaseBdev4", 00:09:01.879 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:01.879 "is_configured": true, 00:09:01.879 "data_offset": 0, 00:09:01.879 "data_size": 65536 00:09:01.879 } 00:09:01.879 ] 00:09:01.879 }' 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.879 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.139 11:51:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 [2024-12-06 11:51:59.841196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.399 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.399 "name": "Existed_Raid", 00:09:02.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.399 "strip_size_kb": 64, 00:09:02.399 "state": "configuring", 00:09:02.399 "raid_level": "raid0", 00:09:02.399 "superblock": false, 00:09:02.399 "num_base_bdevs": 4, 00:09:02.399 "num_base_bdevs_discovered": 3, 00:09:02.399 "num_base_bdevs_operational": 4, 00:09:02.399 "base_bdevs_list": [ 00:09:02.399 { 00:09:02.399 "name": null, 00:09:02.399 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:02.399 "is_configured": false, 00:09:02.399 "data_offset": 0, 00:09:02.399 "data_size": 65536 00:09:02.399 }, 00:09:02.399 { 00:09:02.399 "name": "BaseBdev2", 00:09:02.399 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:02.399 "is_configured": true, 00:09:02.399 "data_offset": 0, 00:09:02.399 "data_size": 65536 00:09:02.399 }, 00:09:02.399 { 00:09:02.399 "name": "BaseBdev3", 00:09:02.399 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:02.399 "is_configured": true, 00:09:02.399 "data_offset": 0, 00:09:02.399 "data_size": 65536 00:09:02.399 }, 00:09:02.399 { 00:09:02.399 "name": "BaseBdev4", 00:09:02.399 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:02.399 "is_configured": true, 00:09:02.399 "data_offset": 0, 00:09:02.399 "data_size": 65536 00:09:02.399 } 00:09:02.399 ] 00:09:02.399 }' 00:09:02.399 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.399 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 939864bc-add8-4691-a378-4f88934ca6ae 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 [2024-12-06 11:52:00.319125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.659 [2024-12-06 11:52:00.319240] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:02.659 [2024-12-06 11:52:00.319284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:02.659 [2024-12-06 11:52:00.319589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:02.659 [2024-12-06 11:52:00.319742] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:02.659 [2024-12-06 11:52:00.319781] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:02.659 [2024-12-06 11:52:00.319979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.659 NewBaseBdev 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 [ 00:09:02.659 { 
00:09:02.659 "name": "NewBaseBdev", 00:09:02.659 "aliases": [ 00:09:02.659 "939864bc-add8-4691-a378-4f88934ca6ae" 00:09:02.659 ], 00:09:02.659 "product_name": "Malloc disk", 00:09:02.659 "block_size": 512, 00:09:02.659 "num_blocks": 65536, 00:09:02.659 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:02.659 "assigned_rate_limits": { 00:09:02.659 "rw_ios_per_sec": 0, 00:09:02.659 "rw_mbytes_per_sec": 0, 00:09:02.659 "r_mbytes_per_sec": 0, 00:09:02.659 "w_mbytes_per_sec": 0 00:09:02.659 }, 00:09:02.659 "claimed": true, 00:09:02.659 "claim_type": "exclusive_write", 00:09:02.659 "zoned": false, 00:09:02.659 "supported_io_types": { 00:09:02.659 "read": true, 00:09:02.659 "write": true, 00:09:02.659 "unmap": true, 00:09:02.659 "flush": true, 00:09:02.659 "reset": true, 00:09:02.659 "nvme_admin": false, 00:09:02.659 "nvme_io": false, 00:09:02.659 "nvme_io_md": false, 00:09:02.659 "write_zeroes": true, 00:09:02.659 "zcopy": true, 00:09:02.659 "get_zone_info": false, 00:09:02.659 "zone_management": false, 00:09:02.659 "zone_append": false, 00:09:02.659 "compare": false, 00:09:02.659 "compare_and_write": false, 00:09:02.659 "abort": true, 00:09:02.659 "seek_hole": false, 00:09:02.659 "seek_data": false, 00:09:02.659 "copy": true, 00:09:02.659 "nvme_iov_md": false 00:09:02.659 }, 00:09:02.659 "memory_domains": [ 00:09:02.659 { 00:09:02.659 "dma_device_id": "system", 00:09:02.659 "dma_device_type": 1 00:09:02.659 }, 00:09:02.659 { 00:09:02.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.659 "dma_device_type": 2 00:09:02.659 } 00:09:02.659 ], 00:09:02.659 "driver_specific": {} 00:09:02.659 } 00:09:02.659 ] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:02.659 
11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.659 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.659 "name": "Existed_Raid", 00:09:02.659 "uuid": "495ce5e7-a55c-4f6b-be25-80e425133647", 00:09:02.659 "strip_size_kb": 64, 00:09:02.659 "state": "online", 00:09:02.659 "raid_level": "raid0", 00:09:02.659 "superblock": false, 00:09:02.659 "num_base_bdevs": 4, 00:09:02.659 "num_base_bdevs_discovered": 4, 00:09:02.659 
"num_base_bdevs_operational": 4, 00:09:02.659 "base_bdevs_list": [ 00:09:02.659 { 00:09:02.659 "name": "NewBaseBdev", 00:09:02.659 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:02.659 "is_configured": true, 00:09:02.659 "data_offset": 0, 00:09:02.659 "data_size": 65536 00:09:02.659 }, 00:09:02.659 { 00:09:02.659 "name": "BaseBdev2", 00:09:02.659 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:02.659 "is_configured": true, 00:09:02.659 "data_offset": 0, 00:09:02.659 "data_size": 65536 00:09:02.660 }, 00:09:02.660 { 00:09:02.660 "name": "BaseBdev3", 00:09:02.660 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:02.660 "is_configured": true, 00:09:02.660 "data_offset": 0, 00:09:02.660 "data_size": 65536 00:09:02.660 }, 00:09:02.660 { 00:09:02.660 "name": "BaseBdev4", 00:09:02.660 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:02.660 "is_configured": true, 00:09:02.660 "data_offset": 0, 00:09:02.660 "data_size": 65536 00:09:02.660 } 00:09:02.660 ] 00:09:02.660 }' 00:09:02.660 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.660 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.229 
11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 [2024-12-06 11:52:00.822579] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.229 "name": "Existed_Raid", 00:09:03.229 "aliases": [ 00:09:03.229 "495ce5e7-a55c-4f6b-be25-80e425133647" 00:09:03.229 ], 00:09:03.229 "product_name": "Raid Volume", 00:09:03.229 "block_size": 512, 00:09:03.229 "num_blocks": 262144, 00:09:03.229 "uuid": "495ce5e7-a55c-4f6b-be25-80e425133647", 00:09:03.229 "assigned_rate_limits": { 00:09:03.229 "rw_ios_per_sec": 0, 00:09:03.229 "rw_mbytes_per_sec": 0, 00:09:03.229 "r_mbytes_per_sec": 0, 00:09:03.229 "w_mbytes_per_sec": 0 00:09:03.229 }, 00:09:03.229 "claimed": false, 00:09:03.229 "zoned": false, 00:09:03.229 "supported_io_types": { 00:09:03.229 "read": true, 00:09:03.229 "write": true, 00:09:03.229 "unmap": true, 00:09:03.229 "flush": true, 00:09:03.229 "reset": true, 00:09:03.229 "nvme_admin": false, 00:09:03.229 "nvme_io": false, 00:09:03.229 "nvme_io_md": false, 00:09:03.229 "write_zeroes": true, 00:09:03.229 "zcopy": false, 00:09:03.229 "get_zone_info": false, 00:09:03.229 "zone_management": false, 00:09:03.229 "zone_append": false, 00:09:03.229 "compare": false, 00:09:03.229 "compare_and_write": false, 00:09:03.229 "abort": false, 00:09:03.229 "seek_hole": false, 00:09:03.229 "seek_data": false, 00:09:03.229 "copy": false, 00:09:03.229 "nvme_iov_md": false 00:09:03.229 }, 00:09:03.229 "memory_domains": [ 00:09:03.229 { 00:09:03.229 "dma_device_id": 
"system", 00:09:03.229 "dma_device_type": 1 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.229 "dma_device_type": 2 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "system", 00:09:03.229 "dma_device_type": 1 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.229 "dma_device_type": 2 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "system", 00:09:03.229 "dma_device_type": 1 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.229 "dma_device_type": 2 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "system", 00:09:03.229 "dma_device_type": 1 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.229 "dma_device_type": 2 00:09:03.229 } 00:09:03.229 ], 00:09:03.229 "driver_specific": { 00:09:03.229 "raid": { 00:09:03.229 "uuid": "495ce5e7-a55c-4f6b-be25-80e425133647", 00:09:03.229 "strip_size_kb": 64, 00:09:03.229 "state": "online", 00:09:03.229 "raid_level": "raid0", 00:09:03.229 "superblock": false, 00:09:03.229 "num_base_bdevs": 4, 00:09:03.229 "num_base_bdevs_discovered": 4, 00:09:03.229 "num_base_bdevs_operational": 4, 00:09:03.229 "base_bdevs_list": [ 00:09:03.229 { 00:09:03.229 "name": "NewBaseBdev", 00:09:03.229 "uuid": "939864bc-add8-4691-a378-4f88934ca6ae", 00:09:03.229 "is_configured": true, 00:09:03.229 "data_offset": 0, 00:09:03.229 "data_size": 65536 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "name": "BaseBdev2", 00:09:03.229 "uuid": "8eedae87-53ba-4f2d-9c4c-303ab3cbe3dd", 00:09:03.229 "is_configured": true, 00:09:03.229 "data_offset": 0, 00:09:03.229 "data_size": 65536 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "name": "BaseBdev3", 00:09:03.229 "uuid": "26e4e3b5-0fdf-4e60-b914-69834c6b1f50", 00:09:03.229 "is_configured": true, 00:09:03.229 "data_offset": 0, 00:09:03.229 "data_size": 65536 00:09:03.229 }, 00:09:03.229 { 00:09:03.229 "name": 
"BaseBdev4", 00:09:03.229 "uuid": "ebfa667c-ae6b-47e8-ab98-d263242c0fcf", 00:09:03.229 "is_configured": true, 00:09:03.229 "data_offset": 0, 00:09:03.229 "data_size": 65536 00:09:03.229 } 00:09:03.229 ] 00:09:03.229 } 00:09:03.229 } 00:09:03.229 }' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:03.229 BaseBdev2 00:09:03.229 BaseBdev3 00:09:03.229 BaseBdev4' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.229 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.230 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:03.489 11:52:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.489 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.490 [2024-12-06 11:52:01.085826] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.490 [2024-12-06 11:52:01.085930] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.490 [2024-12-06 11:52:01.086012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.490 [2024-12-06 11:52:01.086085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.490 [2024-12-06 11:52:01.086150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80032 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 80032 ']' 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80032 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80032 00:09:03.490 killing process with pid 80032 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80032' 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80032 00:09:03.490 [2024-12-06 11:52:01.123939] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.490 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80032 00:09:03.490 [2024-12-06 11:52:01.164643] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.750 00:09:03.750 real 0m9.262s 00:09:03.750 user 0m15.878s 00:09:03.750 sys 0m1.856s 00:09:03.750 ************************************ 00:09:03.750 END TEST raid_state_function_test 00:09:03.750 ************************************ 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 11:52:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:09:03.750 11:52:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.750 11:52:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.750 11:52:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 ************************************ 00:09:03.750 START TEST raid_state_function_test_sb 00:09:03.750 ************************************ 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.750 11:52:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80681 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 80681' 00:09:03.750 Process raid pid: 80681 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80681 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80681 ']' 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.750 11:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 [2024-12-06 11:52:01.574811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:04.010 [2024-12-06 11:52:01.575055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.010 [2024-12-06 11:52:01.719422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.010 [2024-12-06 11:52:01.762741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.270 [2024-12-06 11:52:01.803866] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.270 [2024-12-06 11:52:01.803979] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.840 [2024-12-06 11:52:02.384449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.840 [2024-12-06 11:52:02.384563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.840 [2024-12-06 11:52:02.384594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.840 [2024-12-06 11:52:02.384618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.840 [2024-12-06 11:52:02.384626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:04.840 [2024-12-06 11:52:02.384638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.840 [2024-12-06 11:52:02.384644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:04.840 [2024-12-06 11:52:02.384653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.840 11:52:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.840 "name": "Existed_Raid", 00:09:04.840 "uuid": "c276392f-5abd-425a-b7a5-0c77d9d3dd71", 00:09:04.840 "strip_size_kb": 64, 00:09:04.840 "state": "configuring", 00:09:04.840 "raid_level": "raid0", 00:09:04.840 "superblock": true, 00:09:04.840 "num_base_bdevs": 4, 00:09:04.840 "num_base_bdevs_discovered": 0, 00:09:04.840 "num_base_bdevs_operational": 4, 00:09:04.840 "base_bdevs_list": [ 00:09:04.840 { 00:09:04.840 "name": "BaseBdev1", 00:09:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.840 "is_configured": false, 00:09:04.840 "data_offset": 0, 00:09:04.840 "data_size": 0 00:09:04.840 }, 00:09:04.840 { 00:09:04.840 "name": "BaseBdev2", 00:09:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.840 "is_configured": false, 00:09:04.840 "data_offset": 0, 00:09:04.840 "data_size": 0 00:09:04.840 }, 00:09:04.840 { 00:09:04.840 "name": "BaseBdev3", 00:09:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.840 "is_configured": false, 00:09:04.840 "data_offset": 0, 00:09:04.840 "data_size": 0 00:09:04.840 }, 00:09:04.840 { 00:09:04.840 "name": "BaseBdev4", 00:09:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.840 "is_configured": false, 00:09:04.840 "data_offset": 0, 00:09:04.840 "data_size": 0 00:09:04.840 } 00:09:04.840 ] 00:09:04.840 }' 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.840 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.100 [2024-12-06 11:52:02.839544] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.100 [2024-12-06 11:52:02.839638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.100 [2024-12-06 11:52:02.851544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.100 [2024-12-06 11:52:02.851628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.100 [2024-12-06 11:52:02.851653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.100 [2024-12-06 11:52:02.851674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.100 [2024-12-06 11:52:02.851691] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.100 [2024-12-06 11:52:02.851710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.100 [2024-12-06 11:52:02.851727] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:05.100 [2024-12-06 11:52:02.851746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.100 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.361 [2024-12-06 11:52:02.872113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.361 BaseBdev1 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.361 [ 00:09:05.361 { 00:09:05.361 "name": "BaseBdev1", 00:09:05.361 "aliases": [ 00:09:05.361 "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e" 00:09:05.361 ], 00:09:05.361 "product_name": "Malloc disk", 00:09:05.361 "block_size": 512, 00:09:05.361 "num_blocks": 65536, 00:09:05.361 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:05.361 "assigned_rate_limits": { 00:09:05.361 "rw_ios_per_sec": 0, 00:09:05.361 "rw_mbytes_per_sec": 0, 00:09:05.361 "r_mbytes_per_sec": 0, 00:09:05.361 "w_mbytes_per_sec": 0 00:09:05.361 }, 00:09:05.361 "claimed": true, 00:09:05.361 "claim_type": "exclusive_write", 00:09:05.361 "zoned": false, 00:09:05.361 "supported_io_types": { 00:09:05.361 "read": true, 00:09:05.361 "write": true, 00:09:05.361 "unmap": true, 00:09:05.361 "flush": true, 00:09:05.361 "reset": true, 00:09:05.361 "nvme_admin": false, 00:09:05.361 "nvme_io": false, 00:09:05.361 "nvme_io_md": false, 00:09:05.361 "write_zeroes": true, 00:09:05.361 "zcopy": true, 00:09:05.361 "get_zone_info": false, 00:09:05.361 "zone_management": false, 00:09:05.361 "zone_append": false, 00:09:05.361 "compare": false, 00:09:05.361 "compare_and_write": false, 00:09:05.361 "abort": true, 00:09:05.361 "seek_hole": false, 00:09:05.361 "seek_data": false, 00:09:05.361 "copy": true, 00:09:05.361 "nvme_iov_md": false 00:09:05.361 }, 00:09:05.361 "memory_domains": [ 00:09:05.361 { 00:09:05.361 "dma_device_id": "system", 00:09:05.361 "dma_device_type": 1 00:09:05.361 }, 00:09:05.361 { 00:09:05.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.361 "dma_device_type": 2 00:09:05.361 } 00:09:05.361 ], 00:09:05.361 "driver_specific": {} 
00:09:05.361 } 00:09:05.361 ] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.361 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.361 "name": "Existed_Raid", 00:09:05.361 "uuid": "cc41d78b-f5f7-41c8-9a98-31ca71604a1d", 00:09:05.361 "strip_size_kb": 64, 00:09:05.361 "state": "configuring", 00:09:05.361 "raid_level": "raid0", 00:09:05.361 "superblock": true, 00:09:05.361 "num_base_bdevs": 4, 00:09:05.361 "num_base_bdevs_discovered": 1, 00:09:05.361 "num_base_bdevs_operational": 4, 00:09:05.361 "base_bdevs_list": [ 00:09:05.361 { 00:09:05.361 "name": "BaseBdev1", 00:09:05.361 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:05.361 "is_configured": true, 00:09:05.361 "data_offset": 2048, 00:09:05.361 "data_size": 63488 00:09:05.361 }, 00:09:05.361 { 00:09:05.362 "name": "BaseBdev2", 00:09:05.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.362 "is_configured": false, 00:09:05.362 "data_offset": 0, 00:09:05.362 "data_size": 0 00:09:05.362 }, 00:09:05.362 { 00:09:05.362 "name": "BaseBdev3", 00:09:05.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.362 "is_configured": false, 00:09:05.362 "data_offset": 0, 00:09:05.362 "data_size": 0 00:09:05.362 }, 00:09:05.362 { 00:09:05.362 "name": "BaseBdev4", 00:09:05.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.362 "is_configured": false, 00:09:05.362 "data_offset": 0, 00:09:05.362 "data_size": 0 00:09:05.362 } 00:09:05.362 ] 00:09:05.362 }' 00:09:05.362 11:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.362 11:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.622 [2024-12-06 11:52:03.227517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.622 [2024-12-06 11:52:03.227603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.622 [2024-12-06 11:52:03.239554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.622 [2024-12-06 11:52:03.241364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.622 [2024-12-06 11:52:03.241431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.622 [2024-12-06 11:52:03.241457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.622 [2024-12-06 11:52:03.241478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.622 [2024-12-06 11:52:03.241495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.622 [2024-12-06 11:52:03.241514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.622 11:52:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.622 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.622 "name": 
"Existed_Raid", 00:09:05.622 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:05.622 "strip_size_kb": 64, 00:09:05.622 "state": "configuring", 00:09:05.622 "raid_level": "raid0", 00:09:05.622 "superblock": true, 00:09:05.622 "num_base_bdevs": 4, 00:09:05.622 "num_base_bdevs_discovered": 1, 00:09:05.622 "num_base_bdevs_operational": 4, 00:09:05.622 "base_bdevs_list": [ 00:09:05.622 { 00:09:05.622 "name": "BaseBdev1", 00:09:05.622 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:05.622 "is_configured": true, 00:09:05.622 "data_offset": 2048, 00:09:05.622 "data_size": 63488 00:09:05.622 }, 00:09:05.622 { 00:09:05.622 "name": "BaseBdev2", 00:09:05.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.622 "is_configured": false, 00:09:05.622 "data_offset": 0, 00:09:05.622 "data_size": 0 00:09:05.622 }, 00:09:05.622 { 00:09:05.622 "name": "BaseBdev3", 00:09:05.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.622 "is_configured": false, 00:09:05.622 "data_offset": 0, 00:09:05.622 "data_size": 0 00:09:05.622 }, 00:09:05.622 { 00:09:05.622 "name": "BaseBdev4", 00:09:05.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.623 "is_configured": false, 00:09:05.623 "data_offset": 0, 00:09:05.623 "data_size": 0 00:09:05.623 } 00:09:05.623 ] 00:09:05.623 }' 00:09:05.623 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.623 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.193 BaseBdev2 00:09:06.193 [2024-12-06 11:52:03.739494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.193 [ 00:09:06.193 { 00:09:06.193 "name": "BaseBdev2", 00:09:06.193 "aliases": [ 00:09:06.193 "c226f794-b46a-40a6-a5f9-ce4513cf3c99" 00:09:06.193 ], 00:09:06.193 "product_name": "Malloc disk", 00:09:06.193 "block_size": 512, 00:09:06.193 "num_blocks": 65536, 00:09:06.193 "uuid": "c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:06.193 "assigned_rate_limits": { 
00:09:06.193 "rw_ios_per_sec": 0, 00:09:06.193 "rw_mbytes_per_sec": 0, 00:09:06.193 "r_mbytes_per_sec": 0, 00:09:06.193 "w_mbytes_per_sec": 0 00:09:06.193 }, 00:09:06.193 "claimed": true, 00:09:06.193 "claim_type": "exclusive_write", 00:09:06.193 "zoned": false, 00:09:06.193 "supported_io_types": { 00:09:06.193 "read": true, 00:09:06.193 "write": true, 00:09:06.193 "unmap": true, 00:09:06.193 "flush": true, 00:09:06.193 "reset": true, 00:09:06.193 "nvme_admin": false, 00:09:06.193 "nvme_io": false, 00:09:06.193 "nvme_io_md": false, 00:09:06.193 "write_zeroes": true, 00:09:06.193 "zcopy": true, 00:09:06.193 "get_zone_info": false, 00:09:06.193 "zone_management": false, 00:09:06.193 "zone_append": false, 00:09:06.193 "compare": false, 00:09:06.193 "compare_and_write": false, 00:09:06.193 "abort": true, 00:09:06.193 "seek_hole": false, 00:09:06.193 "seek_data": false, 00:09:06.193 "copy": true, 00:09:06.193 "nvme_iov_md": false 00:09:06.193 }, 00:09:06.193 "memory_domains": [ 00:09:06.193 { 00:09:06.193 "dma_device_id": "system", 00:09:06.193 "dma_device_type": 1 00:09:06.193 }, 00:09:06.193 { 00:09:06.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.193 "dma_device_type": 2 00:09:06.193 } 00:09:06.193 ], 00:09:06.193 "driver_specific": {} 00:09:06.193 } 00:09:06.193 ] 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.193 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.194 "name": "Existed_Raid", 00:09:06.194 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:06.194 "strip_size_kb": 64, 00:09:06.194 "state": "configuring", 00:09:06.194 "raid_level": "raid0", 00:09:06.194 "superblock": true, 00:09:06.194 "num_base_bdevs": 4, 00:09:06.194 "num_base_bdevs_discovered": 2, 00:09:06.194 "num_base_bdevs_operational": 4, 00:09:06.194 
"base_bdevs_list": [ 00:09:06.194 { 00:09:06.194 "name": "BaseBdev1", 00:09:06.194 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:06.194 "is_configured": true, 00:09:06.194 "data_offset": 2048, 00:09:06.194 "data_size": 63488 00:09:06.194 }, 00:09:06.194 { 00:09:06.194 "name": "BaseBdev2", 00:09:06.194 "uuid": "c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:06.194 "is_configured": true, 00:09:06.194 "data_offset": 2048, 00:09:06.194 "data_size": 63488 00:09:06.194 }, 00:09:06.194 { 00:09:06.194 "name": "BaseBdev3", 00:09:06.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.194 "is_configured": false, 00:09:06.194 "data_offset": 0, 00:09:06.194 "data_size": 0 00:09:06.194 }, 00:09:06.194 { 00:09:06.194 "name": "BaseBdev4", 00:09:06.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.194 "is_configured": false, 00:09:06.194 "data_offset": 0, 00:09:06.194 "data_size": 0 00:09:06.194 } 00:09:06.194 ] 00:09:06.194 }' 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.194 11:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.454 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.454 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.454 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.454 [2024-12-06 11:52:04.209639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.454 BaseBdev3 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.715 [ 00:09:06.715 { 00:09:06.715 "name": "BaseBdev3", 00:09:06.715 "aliases": [ 00:09:06.715 "e31462b9-9a2e-40eb-948b-ea01311b0355" 00:09:06.715 ], 00:09:06.715 "product_name": "Malloc disk", 00:09:06.715 "block_size": 512, 00:09:06.715 "num_blocks": 65536, 00:09:06.715 "uuid": "e31462b9-9a2e-40eb-948b-ea01311b0355", 00:09:06.715 "assigned_rate_limits": { 00:09:06.715 "rw_ios_per_sec": 0, 00:09:06.715 "rw_mbytes_per_sec": 0, 00:09:06.715 "r_mbytes_per_sec": 0, 00:09:06.715 "w_mbytes_per_sec": 0 00:09:06.715 }, 00:09:06.715 "claimed": true, 00:09:06.715 "claim_type": "exclusive_write", 00:09:06.715 "zoned": false, 00:09:06.715 "supported_io_types": { 00:09:06.715 "read": true, 00:09:06.715 
"write": true, 00:09:06.715 "unmap": true, 00:09:06.715 "flush": true, 00:09:06.715 "reset": true, 00:09:06.715 "nvme_admin": false, 00:09:06.715 "nvme_io": false, 00:09:06.715 "nvme_io_md": false, 00:09:06.715 "write_zeroes": true, 00:09:06.715 "zcopy": true, 00:09:06.715 "get_zone_info": false, 00:09:06.715 "zone_management": false, 00:09:06.715 "zone_append": false, 00:09:06.715 "compare": false, 00:09:06.715 "compare_and_write": false, 00:09:06.715 "abort": true, 00:09:06.715 "seek_hole": false, 00:09:06.715 "seek_data": false, 00:09:06.715 "copy": true, 00:09:06.715 "nvme_iov_md": false 00:09:06.715 }, 00:09:06.715 "memory_domains": [ 00:09:06.715 { 00:09:06.715 "dma_device_id": "system", 00:09:06.715 "dma_device_type": 1 00:09:06.715 }, 00:09:06.715 { 00:09:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.715 "dma_device_type": 2 00:09:06.715 } 00:09:06.715 ], 00:09:06.715 "driver_specific": {} 00:09:06.715 } 00:09:06.715 ] 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.715 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.716 "name": "Existed_Raid", 00:09:06.716 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:06.716 "strip_size_kb": 64, 00:09:06.716 "state": "configuring", 00:09:06.716 "raid_level": "raid0", 00:09:06.716 "superblock": true, 00:09:06.716 "num_base_bdevs": 4, 00:09:06.716 "num_base_bdevs_discovered": 3, 00:09:06.716 "num_base_bdevs_operational": 4, 00:09:06.716 "base_bdevs_list": [ 00:09:06.716 { 00:09:06.716 "name": "BaseBdev1", 00:09:06.716 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:06.716 "is_configured": true, 00:09:06.716 "data_offset": 2048, 00:09:06.716 "data_size": 63488 00:09:06.716 }, 00:09:06.716 { 00:09:06.716 "name": "BaseBdev2", 00:09:06.716 "uuid": 
"c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:06.716 "is_configured": true, 00:09:06.716 "data_offset": 2048, 00:09:06.716 "data_size": 63488 00:09:06.716 }, 00:09:06.716 { 00:09:06.716 "name": "BaseBdev3", 00:09:06.716 "uuid": "e31462b9-9a2e-40eb-948b-ea01311b0355", 00:09:06.716 "is_configured": true, 00:09:06.716 "data_offset": 2048, 00:09:06.716 "data_size": 63488 00:09:06.716 }, 00:09:06.716 { 00:09:06.716 "name": "BaseBdev4", 00:09:06.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.716 "is_configured": false, 00:09:06.716 "data_offset": 0, 00:09:06.716 "data_size": 0 00:09:06.716 } 00:09:06.716 ] 00:09:06.716 }' 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.716 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.976 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:06.976 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.976 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.976 [2024-12-06 11:52:04.651733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:06.976 [2024-12-06 11:52:04.652020] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:06.976 [2024-12-06 11:52:04.652082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:06.976 [2024-12-06 11:52:04.652391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:06.976 [2024-12-06 11:52:04.652543] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:06.976 BaseBdev4 00:09:06.976 [2024-12-06 11:52:04.652595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:06.976 [2024-12-06 11:52:04.652724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.977 [ 00:09:06.977 { 00:09:06.977 "name": "BaseBdev4", 00:09:06.977 "aliases": [ 00:09:06.977 "8ec46215-ddad-437b-b013-514b3af97f22" 00:09:06.977 ], 00:09:06.977 "product_name": "Malloc disk", 00:09:06.977 "block_size": 512, 00:09:06.977 
"num_blocks": 65536, 00:09:06.977 "uuid": "8ec46215-ddad-437b-b013-514b3af97f22", 00:09:06.977 "assigned_rate_limits": { 00:09:06.977 "rw_ios_per_sec": 0, 00:09:06.977 "rw_mbytes_per_sec": 0, 00:09:06.977 "r_mbytes_per_sec": 0, 00:09:06.977 "w_mbytes_per_sec": 0 00:09:06.977 }, 00:09:06.977 "claimed": true, 00:09:06.977 "claim_type": "exclusive_write", 00:09:06.977 "zoned": false, 00:09:06.977 "supported_io_types": { 00:09:06.977 "read": true, 00:09:06.977 "write": true, 00:09:06.977 "unmap": true, 00:09:06.977 "flush": true, 00:09:06.977 "reset": true, 00:09:06.977 "nvme_admin": false, 00:09:06.977 "nvme_io": false, 00:09:06.977 "nvme_io_md": false, 00:09:06.977 "write_zeroes": true, 00:09:06.977 "zcopy": true, 00:09:06.977 "get_zone_info": false, 00:09:06.977 "zone_management": false, 00:09:06.977 "zone_append": false, 00:09:06.977 "compare": false, 00:09:06.977 "compare_and_write": false, 00:09:06.977 "abort": true, 00:09:06.977 "seek_hole": false, 00:09:06.977 "seek_data": false, 00:09:06.977 "copy": true, 00:09:06.977 "nvme_iov_md": false 00:09:06.977 }, 00:09:06.977 "memory_domains": [ 00:09:06.977 { 00:09:06.977 "dma_device_id": "system", 00:09:06.977 "dma_device_type": 1 00:09:06.977 }, 00:09:06.977 { 00:09:06.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.977 "dma_device_type": 2 00:09:06.977 } 00:09:06.977 ], 00:09:06.977 "driver_specific": {} 00:09:06.977 } 00:09:06.977 ] 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.977 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.237 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.237 "name": "Existed_Raid", 00:09:07.237 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:07.237 "strip_size_kb": 64, 00:09:07.237 "state": "online", 00:09:07.237 "raid_level": "raid0", 00:09:07.237 "superblock": true, 00:09:07.237 "num_base_bdevs": 4, 
00:09:07.237 "num_base_bdevs_discovered": 4, 00:09:07.237 "num_base_bdevs_operational": 4, 00:09:07.237 "base_bdevs_list": [ 00:09:07.237 { 00:09:07.237 "name": "BaseBdev1", 00:09:07.237 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:07.237 "is_configured": true, 00:09:07.237 "data_offset": 2048, 00:09:07.237 "data_size": 63488 00:09:07.237 }, 00:09:07.237 { 00:09:07.237 "name": "BaseBdev2", 00:09:07.237 "uuid": "c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:07.237 "is_configured": true, 00:09:07.237 "data_offset": 2048, 00:09:07.237 "data_size": 63488 00:09:07.237 }, 00:09:07.237 { 00:09:07.237 "name": "BaseBdev3", 00:09:07.237 "uuid": "e31462b9-9a2e-40eb-948b-ea01311b0355", 00:09:07.237 "is_configured": true, 00:09:07.237 "data_offset": 2048, 00:09:07.237 "data_size": 63488 00:09:07.237 }, 00:09:07.237 { 00:09:07.237 "name": "BaseBdev4", 00:09:07.237 "uuid": "8ec46215-ddad-437b-b013-514b3af97f22", 00:09:07.237 "is_configured": true, 00:09:07.237 "data_offset": 2048, 00:09:07.237 "data_size": 63488 00:09:07.237 } 00:09:07.237 ] 00:09:07.237 }' 00:09:07.237 11:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.237 11:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.496 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.496 
11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.497 [2024-12-06 11:52:05.107350] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.497 "name": "Existed_Raid", 00:09:07.497 "aliases": [ 00:09:07.497 "2cb9569b-7c1a-4f12-a403-51972123a711" 00:09:07.497 ], 00:09:07.497 "product_name": "Raid Volume", 00:09:07.497 "block_size": 512, 00:09:07.497 "num_blocks": 253952, 00:09:07.497 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:07.497 "assigned_rate_limits": { 00:09:07.497 "rw_ios_per_sec": 0, 00:09:07.497 "rw_mbytes_per_sec": 0, 00:09:07.497 "r_mbytes_per_sec": 0, 00:09:07.497 "w_mbytes_per_sec": 0 00:09:07.497 }, 00:09:07.497 "claimed": false, 00:09:07.497 "zoned": false, 00:09:07.497 "supported_io_types": { 00:09:07.497 "read": true, 00:09:07.497 "write": true, 00:09:07.497 "unmap": true, 00:09:07.497 "flush": true, 00:09:07.497 "reset": true, 00:09:07.497 "nvme_admin": false, 00:09:07.497 "nvme_io": false, 00:09:07.497 "nvme_io_md": false, 00:09:07.497 "write_zeroes": true, 00:09:07.497 "zcopy": false, 00:09:07.497 "get_zone_info": false, 00:09:07.497 "zone_management": false, 00:09:07.497 "zone_append": false, 00:09:07.497 "compare": false, 00:09:07.497 "compare_and_write": false, 00:09:07.497 "abort": false, 00:09:07.497 "seek_hole": false, 00:09:07.497 "seek_data": false, 00:09:07.497 "copy": false, 00:09:07.497 
"nvme_iov_md": false 00:09:07.497 }, 00:09:07.497 "memory_domains": [ 00:09:07.497 { 00:09:07.497 "dma_device_id": "system", 00:09:07.497 "dma_device_type": 1 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.497 "dma_device_type": 2 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "system", 00:09:07.497 "dma_device_type": 1 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.497 "dma_device_type": 2 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "system", 00:09:07.497 "dma_device_type": 1 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.497 "dma_device_type": 2 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "system", 00:09:07.497 "dma_device_type": 1 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.497 "dma_device_type": 2 00:09:07.497 } 00:09:07.497 ], 00:09:07.497 "driver_specific": { 00:09:07.497 "raid": { 00:09:07.497 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:07.497 "strip_size_kb": 64, 00:09:07.497 "state": "online", 00:09:07.497 "raid_level": "raid0", 00:09:07.497 "superblock": true, 00:09:07.497 "num_base_bdevs": 4, 00:09:07.497 "num_base_bdevs_discovered": 4, 00:09:07.497 "num_base_bdevs_operational": 4, 00:09:07.497 "base_bdevs_list": [ 00:09:07.497 { 00:09:07.497 "name": "BaseBdev1", 00:09:07.497 "uuid": "4c36e2c9-fa1a-4bc3-a3f9-b3b866ec296e", 00:09:07.497 "is_configured": true, 00:09:07.497 "data_offset": 2048, 00:09:07.497 "data_size": 63488 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "name": "BaseBdev2", 00:09:07.497 "uuid": "c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:07.497 "is_configured": true, 00:09:07.497 "data_offset": 2048, 00:09:07.497 "data_size": 63488 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "name": "BaseBdev3", 00:09:07.497 "uuid": "e31462b9-9a2e-40eb-948b-ea01311b0355", 00:09:07.497 "is_configured": true, 
00:09:07.497 "data_offset": 2048, 00:09:07.497 "data_size": 63488 00:09:07.497 }, 00:09:07.497 { 00:09:07.497 "name": "BaseBdev4", 00:09:07.497 "uuid": "8ec46215-ddad-437b-b013-514b3af97f22", 00:09:07.497 "is_configured": true, 00:09:07.497 "data_offset": 2048, 00:09:07.497 "data_size": 63488 00:09:07.497 } 00:09:07.497 ] 00:09:07.497 } 00:09:07.497 } 00:09:07.497 }' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.497 BaseBdev2 00:09:07.497 BaseBdev3 00:09:07.497 BaseBdev4' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.497 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.756 11:52:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.756 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 [2024-12-06 11:52:05.438526] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.757 [2024-12-06 11:52:05.438592] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.757 [2024-12-06 11:52:05.438692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.757 "name": "Existed_Raid", 00:09:07.757 "uuid": "2cb9569b-7c1a-4f12-a403-51972123a711", 00:09:07.757 "strip_size_kb": 64, 00:09:07.757 "state": "offline", 00:09:07.757 "raid_level": "raid0", 00:09:07.757 "superblock": true, 00:09:07.757 "num_base_bdevs": 4, 00:09:07.757 "num_base_bdevs_discovered": 3, 00:09:07.757 "num_base_bdevs_operational": 3, 00:09:07.757 "base_bdevs_list": [ 00:09:07.757 { 00:09:07.757 "name": null, 00:09:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.757 "is_configured": false, 00:09:07.757 "data_offset": 0, 00:09:07.757 "data_size": 63488 00:09:07.757 }, 00:09:07.757 { 00:09:07.757 "name": "BaseBdev2", 00:09:07.757 "uuid": "c226f794-b46a-40a6-a5f9-ce4513cf3c99", 00:09:07.757 "is_configured": true, 00:09:07.757 "data_offset": 2048, 00:09:07.757 "data_size": 63488 00:09:07.757 }, 00:09:07.757 { 00:09:07.757 "name": "BaseBdev3", 00:09:07.757 "uuid": "e31462b9-9a2e-40eb-948b-ea01311b0355", 00:09:07.757 "is_configured": true, 00:09:07.757 "data_offset": 2048, 00:09:07.757 "data_size": 63488 00:09:07.757 }, 00:09:07.757 { 00:09:07.757 "name": "BaseBdev4", 00:09:07.757 "uuid": "8ec46215-ddad-437b-b013-514b3af97f22", 00:09:07.757 "is_configured": true, 00:09:07.757 "data_offset": 2048, 00:09:07.757 "data_size": 63488 00:09:07.757 } 00:09:07.757 ] 00:09:07.757 }' 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.757 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.327 
11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 [2024-12-06 11:52:05.932909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 [2024-12-06 11:52:05.983886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 11:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:08.327 11:52:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 [2024-12-06 11:52:06.054823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:08.327 [2024-12-06 11:52:06.054917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.327 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 BaseBdev2 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 [ 00:09:08.588 { 00:09:08.588 "name": "BaseBdev2", 00:09:08.588 "aliases": [ 00:09:08.588 
"5f7cc479-4b20-454e-ae73-0a6ed14359b7" 00:09:08.588 ], 00:09:08.588 "product_name": "Malloc disk", 00:09:08.588 "block_size": 512, 00:09:08.588 "num_blocks": 65536, 00:09:08.588 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:08.588 "assigned_rate_limits": { 00:09:08.588 "rw_ios_per_sec": 0, 00:09:08.588 "rw_mbytes_per_sec": 0, 00:09:08.588 "r_mbytes_per_sec": 0, 00:09:08.588 "w_mbytes_per_sec": 0 00:09:08.588 }, 00:09:08.588 "claimed": false, 00:09:08.588 "zoned": false, 00:09:08.588 "supported_io_types": { 00:09:08.588 "read": true, 00:09:08.588 "write": true, 00:09:08.588 "unmap": true, 00:09:08.588 "flush": true, 00:09:08.588 "reset": true, 00:09:08.588 "nvme_admin": false, 00:09:08.588 "nvme_io": false, 00:09:08.588 "nvme_io_md": false, 00:09:08.588 "write_zeroes": true, 00:09:08.588 "zcopy": true, 00:09:08.588 "get_zone_info": false, 00:09:08.588 "zone_management": false, 00:09:08.588 "zone_append": false, 00:09:08.588 "compare": false, 00:09:08.588 "compare_and_write": false, 00:09:08.588 "abort": true, 00:09:08.588 "seek_hole": false, 00:09:08.588 "seek_data": false, 00:09:08.588 "copy": true, 00:09:08.588 "nvme_iov_md": false 00:09:08.588 }, 00:09:08.588 "memory_domains": [ 00:09:08.588 { 00:09:08.588 "dma_device_id": "system", 00:09:08.588 "dma_device_type": 1 00:09:08.588 }, 00:09:08.588 { 00:09:08.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.588 "dma_device_type": 2 00:09:08.588 } 00:09:08.588 ], 00:09:08.588 "driver_specific": {} 00:09:08.588 } 00:09:08.588 ] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.588 11:52:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 BaseBdev3 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.588 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.588 [ 00:09:08.588 { 
00:09:08.588 "name": "BaseBdev3", 00:09:08.588 "aliases": [ 00:09:08.588 "fb62114c-9625-48d1-9874-84eaf922e477" 00:09:08.588 ], 00:09:08.588 "product_name": "Malloc disk", 00:09:08.588 "block_size": 512, 00:09:08.588 "num_blocks": 65536, 00:09:08.589 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:08.589 "assigned_rate_limits": { 00:09:08.589 "rw_ios_per_sec": 0, 00:09:08.589 "rw_mbytes_per_sec": 0, 00:09:08.589 "r_mbytes_per_sec": 0, 00:09:08.589 "w_mbytes_per_sec": 0 00:09:08.589 }, 00:09:08.589 "claimed": false, 00:09:08.589 "zoned": false, 00:09:08.589 "supported_io_types": { 00:09:08.589 "read": true, 00:09:08.589 "write": true, 00:09:08.589 "unmap": true, 00:09:08.589 "flush": true, 00:09:08.589 "reset": true, 00:09:08.589 "nvme_admin": false, 00:09:08.589 "nvme_io": false, 00:09:08.589 "nvme_io_md": false, 00:09:08.589 "write_zeroes": true, 00:09:08.589 "zcopy": true, 00:09:08.589 "get_zone_info": false, 00:09:08.589 "zone_management": false, 00:09:08.589 "zone_append": false, 00:09:08.589 "compare": false, 00:09:08.589 "compare_and_write": false, 00:09:08.589 "abort": true, 00:09:08.589 "seek_hole": false, 00:09:08.589 "seek_data": false, 00:09:08.589 "copy": true, 00:09:08.589 "nvme_iov_md": false 00:09:08.589 }, 00:09:08.589 "memory_domains": [ 00:09:08.589 { 00:09:08.589 "dma_device_id": "system", 00:09:08.589 "dma_device_type": 1 00:09:08.589 }, 00:09:08.589 { 00:09:08.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.589 "dma_device_type": 2 00:09:08.589 } 00:09:08.589 ], 00:09:08.589 "driver_specific": {} 00:09:08.589 } 00:09:08.589 ] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.589 BaseBdev4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:08.589 [ 00:09:08.589 { 00:09:08.589 "name": "BaseBdev4", 00:09:08.589 "aliases": [ 00:09:08.589 "9c171ad7-bef7-4830-bd32-e9fab760d459" 00:09:08.589 ], 00:09:08.589 "product_name": "Malloc disk", 00:09:08.589 "block_size": 512, 00:09:08.589 "num_blocks": 65536, 00:09:08.589 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:08.589 "assigned_rate_limits": { 00:09:08.589 "rw_ios_per_sec": 0, 00:09:08.589 "rw_mbytes_per_sec": 0, 00:09:08.589 "r_mbytes_per_sec": 0, 00:09:08.589 "w_mbytes_per_sec": 0 00:09:08.589 }, 00:09:08.589 "claimed": false, 00:09:08.589 "zoned": false, 00:09:08.589 "supported_io_types": { 00:09:08.589 "read": true, 00:09:08.589 "write": true, 00:09:08.589 "unmap": true, 00:09:08.589 "flush": true, 00:09:08.589 "reset": true, 00:09:08.589 "nvme_admin": false, 00:09:08.589 "nvme_io": false, 00:09:08.589 "nvme_io_md": false, 00:09:08.589 "write_zeroes": true, 00:09:08.589 "zcopy": true, 00:09:08.589 "get_zone_info": false, 00:09:08.589 "zone_management": false, 00:09:08.589 "zone_append": false, 00:09:08.589 "compare": false, 00:09:08.589 "compare_and_write": false, 00:09:08.589 "abort": true, 00:09:08.589 "seek_hole": false, 00:09:08.589 "seek_data": false, 00:09:08.589 "copy": true, 00:09:08.589 "nvme_iov_md": false 00:09:08.589 }, 00:09:08.589 "memory_domains": [ 00:09:08.589 { 00:09:08.589 "dma_device_id": "system", 00:09:08.589 "dma_device_type": 1 00:09:08.589 }, 00:09:08.589 { 00:09:08.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.589 "dma_device_type": 2 00:09:08.589 } 00:09:08.589 ], 00:09:08.589 "driver_specific": {} 00:09:08.589 } 00:09:08.589 ] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.589 11:52:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.589 [2024-12-06 11:52:06.281284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.589 [2024-12-06 11:52:06.281364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.589 [2024-12-06 11:52:06.281403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.589 [2024-12-06 11:52:06.283160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.589 [2024-12-06 11:52:06.283258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.589 "name": "Existed_Raid", 00:09:08.589 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:08.589 "strip_size_kb": 64, 00:09:08.589 "state": "configuring", 00:09:08.589 "raid_level": "raid0", 00:09:08.589 "superblock": true, 00:09:08.589 "num_base_bdevs": 4, 00:09:08.589 "num_base_bdevs_discovered": 3, 00:09:08.589 "num_base_bdevs_operational": 4, 00:09:08.589 "base_bdevs_list": [ 00:09:08.589 { 00:09:08.589 "name": "BaseBdev1", 00:09:08.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.589 "is_configured": false, 00:09:08.589 "data_offset": 0, 00:09:08.589 "data_size": 0 00:09:08.589 }, 00:09:08.589 { 00:09:08.589 "name": "BaseBdev2", 00:09:08.589 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:08.589 "is_configured": true, 00:09:08.589 "data_offset": 2048, 00:09:08.589 "data_size": 63488 
00:09:08.589 }, 00:09:08.589 { 00:09:08.589 "name": "BaseBdev3", 00:09:08.589 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:08.589 "is_configured": true, 00:09:08.589 "data_offset": 2048, 00:09:08.589 "data_size": 63488 00:09:08.589 }, 00:09:08.589 { 00:09:08.589 "name": "BaseBdev4", 00:09:08.589 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:08.589 "is_configured": true, 00:09:08.589 "data_offset": 2048, 00:09:08.589 "data_size": 63488 00:09:08.589 } 00:09:08.589 ] 00:09:08.589 }' 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.589 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.159 [2024-12-06 11:52:06.712509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.159 "name": "Existed_Raid", 00:09:09.159 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:09.159 "strip_size_kb": 64, 00:09:09.159 "state": "configuring", 00:09:09.159 "raid_level": "raid0", 00:09:09.159 "superblock": true, 00:09:09.159 "num_base_bdevs": 4, 00:09:09.159 "num_base_bdevs_discovered": 2, 00:09:09.159 "num_base_bdevs_operational": 4, 00:09:09.159 "base_bdevs_list": [ 00:09:09.159 { 00:09:09.159 "name": "BaseBdev1", 00:09:09.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.159 "is_configured": false, 00:09:09.159 "data_offset": 0, 00:09:09.159 "data_size": 0 00:09:09.159 }, 00:09:09.159 { 00:09:09.159 "name": null, 00:09:09.159 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:09.159 "is_configured": false, 00:09:09.159 "data_offset": 0, 00:09:09.159 "data_size": 63488 
00:09:09.159 }, 00:09:09.159 { 00:09:09.159 "name": "BaseBdev3", 00:09:09.159 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:09.159 "is_configured": true, 00:09:09.159 "data_offset": 2048, 00:09:09.159 "data_size": 63488 00:09:09.159 }, 00:09:09.159 { 00:09:09.159 "name": "BaseBdev4", 00:09:09.159 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:09.159 "is_configured": true, 00:09:09.159 "data_offset": 2048, 00:09:09.159 "data_size": 63488 00:09:09.159 } 00:09:09.159 ] 00:09:09.159 }' 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.159 11:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.420 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.420 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.420 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.420 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.680 [2024-12-06 11:52:07.226492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.680 BaseBdev1 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.680 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.681 [ 00:09:09.681 { 00:09:09.681 "name": "BaseBdev1", 00:09:09.681 "aliases": [ 00:09:09.681 "96568f95-bfbd-47b3-a520-2d85e9ed308a" 00:09:09.681 ], 00:09:09.681 "product_name": "Malloc disk", 00:09:09.681 "block_size": 512, 00:09:09.681 "num_blocks": 65536, 00:09:09.681 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:09.681 "assigned_rate_limits": { 00:09:09.681 "rw_ios_per_sec": 0, 00:09:09.681 "rw_mbytes_per_sec": 0, 
00:09:09.681 "r_mbytes_per_sec": 0, 00:09:09.681 "w_mbytes_per_sec": 0 00:09:09.681 }, 00:09:09.681 "claimed": true, 00:09:09.681 "claim_type": "exclusive_write", 00:09:09.681 "zoned": false, 00:09:09.681 "supported_io_types": { 00:09:09.681 "read": true, 00:09:09.681 "write": true, 00:09:09.681 "unmap": true, 00:09:09.681 "flush": true, 00:09:09.681 "reset": true, 00:09:09.681 "nvme_admin": false, 00:09:09.681 "nvme_io": false, 00:09:09.681 "nvme_io_md": false, 00:09:09.681 "write_zeroes": true, 00:09:09.681 "zcopy": true, 00:09:09.681 "get_zone_info": false, 00:09:09.681 "zone_management": false, 00:09:09.681 "zone_append": false, 00:09:09.681 "compare": false, 00:09:09.681 "compare_and_write": false, 00:09:09.681 "abort": true, 00:09:09.681 "seek_hole": false, 00:09:09.681 "seek_data": false, 00:09:09.681 "copy": true, 00:09:09.681 "nvme_iov_md": false 00:09:09.681 }, 00:09:09.681 "memory_domains": [ 00:09:09.681 { 00:09:09.681 "dma_device_id": "system", 00:09:09.681 "dma_device_type": 1 00:09:09.681 }, 00:09:09.681 { 00:09:09.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.681 "dma_device_type": 2 00:09:09.681 } 00:09:09.681 ], 00:09:09.681 "driver_specific": {} 00:09:09.681 } 00:09:09.681 ] 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.681 11:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.681 "name": "Existed_Raid", 00:09:09.681 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:09.681 "strip_size_kb": 64, 00:09:09.681 "state": "configuring", 00:09:09.681 "raid_level": "raid0", 00:09:09.681 "superblock": true, 00:09:09.681 "num_base_bdevs": 4, 00:09:09.681 "num_base_bdevs_discovered": 3, 00:09:09.681 "num_base_bdevs_operational": 4, 00:09:09.681 "base_bdevs_list": [ 00:09:09.681 { 00:09:09.681 "name": "BaseBdev1", 00:09:09.681 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:09.681 "is_configured": true, 00:09:09.681 "data_offset": 2048, 00:09:09.681 "data_size": 63488 00:09:09.681 }, 00:09:09.681 { 
00:09:09.681 "name": null, 00:09:09.681 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:09.681 "is_configured": false, 00:09:09.681 "data_offset": 0, 00:09:09.681 "data_size": 63488 00:09:09.681 }, 00:09:09.681 { 00:09:09.681 "name": "BaseBdev3", 00:09:09.681 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:09.681 "is_configured": true, 00:09:09.681 "data_offset": 2048, 00:09:09.681 "data_size": 63488 00:09:09.681 }, 00:09:09.681 { 00:09:09.681 "name": "BaseBdev4", 00:09:09.681 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:09.681 "is_configured": true, 00:09:09.681 "data_offset": 2048, 00:09:09.681 "data_size": 63488 00:09:09.681 } 00:09:09.681 ] 00:09:09.681 }' 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.681 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.942 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 [2024-12-06 11:52:07.693745] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.202 11:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.202 "name": "Existed_Raid", 00:09:10.202 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:10.202 "strip_size_kb": 64, 00:09:10.202 "state": "configuring", 00:09:10.202 "raid_level": "raid0", 00:09:10.202 "superblock": true, 00:09:10.202 "num_base_bdevs": 4, 00:09:10.202 "num_base_bdevs_discovered": 2, 00:09:10.202 "num_base_bdevs_operational": 4, 00:09:10.202 "base_bdevs_list": [ 00:09:10.202 { 00:09:10.202 "name": "BaseBdev1", 00:09:10.202 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:10.202 "is_configured": true, 00:09:10.202 "data_offset": 2048, 00:09:10.202 "data_size": 63488 00:09:10.202 }, 00:09:10.202 { 00:09:10.202 "name": null, 00:09:10.202 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:10.202 "is_configured": false, 00:09:10.202 "data_offset": 0, 00:09:10.202 "data_size": 63488 00:09:10.202 }, 00:09:10.202 { 00:09:10.202 "name": null, 00:09:10.202 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:10.202 "is_configured": false, 00:09:10.202 "data_offset": 0, 00:09:10.202 "data_size": 63488 00:09:10.202 }, 00:09:10.202 { 00:09:10.202 "name": "BaseBdev4", 00:09:10.202 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:10.202 "is_configured": true, 00:09:10.202 "data_offset": 2048, 00:09:10.202 "data_size": 63488 00:09:10.202 } 00:09:10.202 ] 00:09:10.202 }' 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.202 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.471 
11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.471 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.472 [2024-12-06 11:52:08.153024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.472 "name": "Existed_Raid", 00:09:10.472 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:10.472 "strip_size_kb": 64, 00:09:10.472 "state": "configuring", 00:09:10.472 "raid_level": "raid0", 00:09:10.472 "superblock": true, 00:09:10.472 "num_base_bdevs": 4, 00:09:10.472 "num_base_bdevs_discovered": 3, 00:09:10.472 "num_base_bdevs_operational": 4, 00:09:10.472 "base_bdevs_list": [ 00:09:10.472 { 00:09:10.472 "name": "BaseBdev1", 00:09:10.472 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:10.472 "is_configured": true, 00:09:10.472 "data_offset": 2048, 00:09:10.472 "data_size": 63488 00:09:10.472 }, 00:09:10.472 { 00:09:10.472 "name": null, 00:09:10.472 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:10.472 "is_configured": false, 00:09:10.472 "data_offset": 0, 00:09:10.472 "data_size": 63488 00:09:10.472 }, 00:09:10.472 { 00:09:10.472 "name": "BaseBdev3", 00:09:10.472 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:10.472 "is_configured": true, 00:09:10.472 "data_offset": 2048, 00:09:10.472 "data_size": 63488 00:09:10.472 }, 00:09:10.472 { 00:09:10.472 "name": "BaseBdev4", 00:09:10.472 "uuid": 
"9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:10.472 "is_configured": true, 00:09:10.472 "data_offset": 2048, 00:09:10.472 "data_size": 63488 00:09:10.472 } 00:09:10.472 ] 00:09:10.472 }' 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.472 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 [2024-12-06 11:52:08.592286] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.074 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.074 "name": "Existed_Raid", 00:09:11.074 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:11.074 "strip_size_kb": 64, 00:09:11.074 "state": "configuring", 00:09:11.074 "raid_level": "raid0", 00:09:11.074 "superblock": true, 00:09:11.074 "num_base_bdevs": 4, 00:09:11.074 "num_base_bdevs_discovered": 2, 00:09:11.074 "num_base_bdevs_operational": 4, 00:09:11.074 "base_bdevs_list": [ 00:09:11.074 { 00:09:11.074 "name": null, 00:09:11.074 
"uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:11.074 "is_configured": false, 00:09:11.074 "data_offset": 0, 00:09:11.074 "data_size": 63488 00:09:11.074 }, 00:09:11.074 { 00:09:11.074 "name": null, 00:09:11.074 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:11.074 "is_configured": false, 00:09:11.074 "data_offset": 0, 00:09:11.074 "data_size": 63488 00:09:11.074 }, 00:09:11.074 { 00:09:11.074 "name": "BaseBdev3", 00:09:11.074 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:11.074 "is_configured": true, 00:09:11.074 "data_offset": 2048, 00:09:11.074 "data_size": 63488 00:09:11.074 }, 00:09:11.074 { 00:09:11.074 "name": "BaseBdev4", 00:09:11.074 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:11.074 "is_configured": true, 00:09:11.074 "data_offset": 2048, 00:09:11.074 "data_size": 63488 00:09:11.075 } 00:09:11.075 ] 00:09:11.075 }' 00:09:11.075 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.075 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 [2024-12-06 11:52:09.057843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.357 11:52:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.631 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.631 "name": "Existed_Raid", 00:09:11.631 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:11.631 "strip_size_kb": 64, 00:09:11.631 "state": "configuring", 00:09:11.631 "raid_level": "raid0", 00:09:11.631 "superblock": true, 00:09:11.631 "num_base_bdevs": 4, 00:09:11.631 "num_base_bdevs_discovered": 3, 00:09:11.631 "num_base_bdevs_operational": 4, 00:09:11.631 "base_bdevs_list": [ 00:09:11.631 { 00:09:11.631 "name": null, 00:09:11.631 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:11.631 "is_configured": false, 00:09:11.631 "data_offset": 0, 00:09:11.631 "data_size": 63488 00:09:11.631 }, 00:09:11.631 { 00:09:11.631 "name": "BaseBdev2", 00:09:11.631 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:11.631 "is_configured": true, 00:09:11.631 "data_offset": 2048, 00:09:11.631 "data_size": 63488 00:09:11.631 }, 00:09:11.631 { 00:09:11.631 "name": "BaseBdev3", 00:09:11.631 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:11.632 "is_configured": true, 00:09:11.632 "data_offset": 2048, 00:09:11.632 "data_size": 63488 00:09:11.632 }, 00:09:11.632 { 00:09:11.632 "name": "BaseBdev4", 00:09:11.632 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:11.632 "is_configured": true, 00:09:11.632 "data_offset": 2048, 00:09:11.632 "data_size": 63488 00:09:11.632 } 00:09:11.632 ] 00:09:11.632 }' 00:09:11.632 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.632 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.895 11:52:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96568f95-bfbd-47b3-a520-2d85e9ed308a 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 [2024-12-06 11:52:09.571741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.895 [2024-12-06 11:52:09.571966] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:11.895 [2024-12-06 11:52:09.572011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:11.895 [2024-12-06 11:52:09.572281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:11.895 [2024-12-06 11:52:09.572420] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:11.895 [2024-12-06 11:52:09.572459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:11.895 NewBaseBdev 00:09:11.895 [2024-12-06 11:52:09.572584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 11:52:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 [ 00:09:11.895 { 00:09:11.895 "name": "NewBaseBdev", 00:09:11.895 "aliases": [ 00:09:11.895 "96568f95-bfbd-47b3-a520-2d85e9ed308a" 00:09:11.895 ], 00:09:11.895 "product_name": "Malloc disk", 00:09:11.895 "block_size": 512, 00:09:11.895 "num_blocks": 65536, 00:09:11.895 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:11.895 "assigned_rate_limits": { 00:09:11.895 "rw_ios_per_sec": 0, 00:09:11.895 "rw_mbytes_per_sec": 0, 00:09:11.895 "r_mbytes_per_sec": 0, 00:09:11.895 "w_mbytes_per_sec": 0 00:09:11.895 }, 00:09:11.895 "claimed": true, 00:09:11.895 "claim_type": "exclusive_write", 00:09:11.895 "zoned": false, 00:09:11.895 "supported_io_types": { 00:09:11.895 "read": true, 00:09:11.895 "write": true, 00:09:11.895 "unmap": true, 00:09:11.895 "flush": true, 00:09:11.895 "reset": true, 00:09:11.895 "nvme_admin": false, 00:09:11.895 "nvme_io": false, 00:09:11.895 "nvme_io_md": false, 00:09:11.895 "write_zeroes": true, 00:09:11.895 "zcopy": true, 00:09:11.895 "get_zone_info": false, 00:09:11.895 "zone_management": false, 00:09:11.895 "zone_append": false, 00:09:11.895 "compare": false, 00:09:11.895 "compare_and_write": false, 00:09:11.895 "abort": true, 00:09:11.895 "seek_hole": false, 00:09:11.895 "seek_data": false, 00:09:11.895 "copy": true, 00:09:11.895 "nvme_iov_md": false 00:09:11.895 }, 00:09:11.895 "memory_domains": [ 00:09:11.895 { 00:09:11.895 "dma_device_id": "system", 00:09:11.895 "dma_device_type": 1 00:09:11.895 }, 00:09:11.895 { 00:09:11.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.895 "dma_device_type": 2 00:09:11.895 } 00:09:11.895 ], 00:09:11.895 "driver_specific": {} 00:09:11.895 } 00:09:11.895 ] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:11.895 11:52:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.895 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.896 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.155 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.155 "name": "Existed_Raid", 00:09:12.155 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:12.155 "strip_size_kb": 64, 00:09:12.155 
"state": "online", 00:09:12.155 "raid_level": "raid0", 00:09:12.155 "superblock": true, 00:09:12.155 "num_base_bdevs": 4, 00:09:12.155 "num_base_bdevs_discovered": 4, 00:09:12.155 "num_base_bdevs_operational": 4, 00:09:12.155 "base_bdevs_list": [ 00:09:12.155 { 00:09:12.155 "name": "NewBaseBdev", 00:09:12.155 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:12.155 "is_configured": true, 00:09:12.155 "data_offset": 2048, 00:09:12.155 "data_size": 63488 00:09:12.155 }, 00:09:12.155 { 00:09:12.155 "name": "BaseBdev2", 00:09:12.155 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:12.155 "is_configured": true, 00:09:12.155 "data_offset": 2048, 00:09:12.155 "data_size": 63488 00:09:12.155 }, 00:09:12.155 { 00:09:12.155 "name": "BaseBdev3", 00:09:12.155 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:12.155 "is_configured": true, 00:09:12.155 "data_offset": 2048, 00:09:12.155 "data_size": 63488 00:09:12.155 }, 00:09:12.155 { 00:09:12.155 "name": "BaseBdev4", 00:09:12.155 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:12.155 "is_configured": true, 00:09:12.155 "data_offset": 2048, 00:09:12.155 "data_size": 63488 00:09:12.155 } 00:09:12.155 ] 00:09:12.155 }' 00:09:12.155 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.155 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.415 
11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.415 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 [2024-12-06 11:52:09.999341] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.415 "name": "Existed_Raid", 00:09:12.415 "aliases": [ 00:09:12.415 "990ec9cc-d239-4319-9dfa-04261c139dfa" 00:09:12.415 ], 00:09:12.415 "product_name": "Raid Volume", 00:09:12.415 "block_size": 512, 00:09:12.415 "num_blocks": 253952, 00:09:12.415 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:12.415 "assigned_rate_limits": { 00:09:12.415 "rw_ios_per_sec": 0, 00:09:12.415 "rw_mbytes_per_sec": 0, 00:09:12.415 "r_mbytes_per_sec": 0, 00:09:12.415 "w_mbytes_per_sec": 0 00:09:12.415 }, 00:09:12.415 "claimed": false, 00:09:12.415 "zoned": false, 00:09:12.415 "supported_io_types": { 00:09:12.415 "read": true, 00:09:12.415 "write": true, 00:09:12.415 "unmap": true, 00:09:12.415 "flush": true, 00:09:12.415 "reset": true, 00:09:12.415 "nvme_admin": false, 00:09:12.415 "nvme_io": false, 00:09:12.415 "nvme_io_md": false, 00:09:12.415 "write_zeroes": true, 00:09:12.415 "zcopy": false, 00:09:12.415 "get_zone_info": false, 00:09:12.415 "zone_management": false, 00:09:12.415 "zone_append": false, 00:09:12.415 "compare": false, 00:09:12.415 "compare_and_write": false, 00:09:12.415 "abort": 
false, 00:09:12.415 "seek_hole": false, 00:09:12.415 "seek_data": false, 00:09:12.415 "copy": false, 00:09:12.415 "nvme_iov_md": false 00:09:12.415 }, 00:09:12.415 "memory_domains": [ 00:09:12.415 { 00:09:12.415 "dma_device_id": "system", 00:09:12.415 "dma_device_type": 1 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.415 "dma_device_type": 2 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "system", 00:09:12.415 "dma_device_type": 1 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.415 "dma_device_type": 2 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "system", 00:09:12.415 "dma_device_type": 1 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.415 "dma_device_type": 2 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "system", 00:09:12.415 "dma_device_type": 1 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.415 "dma_device_type": 2 00:09:12.415 } 00:09:12.415 ], 00:09:12.415 "driver_specific": { 00:09:12.415 "raid": { 00:09:12.415 "uuid": "990ec9cc-d239-4319-9dfa-04261c139dfa", 00:09:12.415 "strip_size_kb": 64, 00:09:12.415 "state": "online", 00:09:12.415 "raid_level": "raid0", 00:09:12.415 "superblock": true, 00:09:12.415 "num_base_bdevs": 4, 00:09:12.415 "num_base_bdevs_discovered": 4, 00:09:12.415 "num_base_bdevs_operational": 4, 00:09:12.415 "base_bdevs_list": [ 00:09:12.415 { 00:09:12.415 "name": "NewBaseBdev", 00:09:12.415 "uuid": "96568f95-bfbd-47b3-a520-2d85e9ed308a", 00:09:12.415 "is_configured": true, 00:09:12.415 "data_offset": 2048, 00:09:12.415 "data_size": 63488 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "name": "BaseBdev2", 00:09:12.415 "uuid": "5f7cc479-4b20-454e-ae73-0a6ed14359b7", 00:09:12.415 "is_configured": true, 00:09:12.415 "data_offset": 2048, 00:09:12.415 "data_size": 63488 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 
"name": "BaseBdev3", 00:09:12.415 "uuid": "fb62114c-9625-48d1-9874-84eaf922e477", 00:09:12.415 "is_configured": true, 00:09:12.415 "data_offset": 2048, 00:09:12.415 "data_size": 63488 00:09:12.415 }, 00:09:12.415 { 00:09:12.415 "name": "BaseBdev4", 00:09:12.415 "uuid": "9c171ad7-bef7-4830-bd32-e9fab760d459", 00:09:12.415 "is_configured": true, 00:09:12.415 "data_offset": 2048, 00:09:12.415 "data_size": 63488 00:09:12.415 } 00:09:12.415 ] 00:09:12.415 } 00:09:12.415 } 00:09:12.415 }' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.415 BaseBdev2 00:09:12.415 BaseBdev3 00:09:12.415 BaseBdev4' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.415 11:52:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.415 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 [2024-12-06 11:52:10.274541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.676 [2024-12-06 11:52:10.274569] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.676 [2024-12-06 11:52:10.274631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.676 [2024-12-06 11:52:10.274689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.676 [2024-12-06 11:52:10.274704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80681 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80681 ']' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80681 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80681 00:09:12.676 killing process with pid 80681 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80681' 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80681 00:09:12.676 [2024-12-06 11:52:10.321024] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.676 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80681 00:09:12.676 [2024-12-06 11:52:10.362135] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.936 ************************************ 00:09:12.936 END TEST raid_state_function_test_sb 00:09:12.936 ************************************ 00:09:12.936 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.936 00:09:12.936 real 0m9.119s 00:09:12.936 user 0m15.586s 00:09:12.936 sys 
0m1.831s 00:09:12.936 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.936 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.936 11:52:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:12.936 11:52:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:12.936 11:52:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.936 11:52:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.936 ************************************ 00:09:12.936 START TEST raid_superblock_test 00:09:12.936 ************************************ 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81328 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81328 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81328 ']' 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.936 11:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.196 [2024-12-06 11:52:10.754180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:13.196 [2024-12-06 11:52:10.754321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81328 ] 00:09:13.196 [2024-12-06 11:52:10.899555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.196 [2024-12-06 11:52:10.943470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.456 [2024-12-06 11:52:10.985375] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.456 [2024-12-06 11:52:10.985416] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:14.026 
11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.026 malloc1 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.026 [2024-12-06 11:52:11.606689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:14.026 [2024-12-06 11:52:11.606757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.026 [2024-12-06 11:52:11.606783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:14.026 [2024-12-06 11:52:11.606796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.026 [2024-12-06 11:52:11.608838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.026 [2024-12-06 11:52:11.608876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:14.026 pt1 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.026 malloc2 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.026 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.026 [2024-12-06 11:52:11.652216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.026 [2024-12-06 11:52:11.652335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.026 [2024-12-06 11:52:11.652372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:14.027 [2024-12-06 11:52:11.652397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.027 [2024-12-06 11:52:11.657095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.027 [2024-12-06 11:52:11.657173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.027 
pt2 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 malloc3 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 [2024-12-06 11:52:11.683037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.027 [2024-12-06 11:52:11.683105] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.027 [2024-12-06 11:52:11.683122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:14.027 [2024-12-06 11:52:11.683132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.027 [2024-12-06 11:52:11.685156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.027 [2024-12-06 11:52:11.685196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.027 pt3 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 malloc4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 [2024-12-06 11:52:11.711614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:14.027 [2024-12-06 11:52:11.711665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.027 [2024-12-06 11:52:11.711680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:14.027 [2024-12-06 11:52:11.711694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.027 [2024-12-06 11:52:11.713733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.027 [2024-12-06 11:52:11.713769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:14.027 pt4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 [2024-12-06 11:52:11.723652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:14.027 [2024-12-06 
11:52:11.725487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.027 [2024-12-06 11:52:11.725552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.027 [2024-12-06 11:52:11.725594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:14.027 [2024-12-06 11:52:11.725738] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:14.027 [2024-12-06 11:52:11.725757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:14.027 [2024-12-06 11:52:11.726000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:14.027 [2024-12-06 11:52:11.726148] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:14.027 [2024-12-06 11:52:11.726167] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:14.027 [2024-12-06 11:52:11.726299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.027 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.286 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.287 "name": "raid_bdev1", 00:09:14.287 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:14.287 "strip_size_kb": 64, 00:09:14.287 "state": "online", 00:09:14.287 "raid_level": "raid0", 00:09:14.287 "superblock": true, 00:09:14.287 "num_base_bdevs": 4, 00:09:14.287 "num_base_bdevs_discovered": 4, 00:09:14.287 "num_base_bdevs_operational": 4, 00:09:14.287 "base_bdevs_list": [ 00:09:14.287 { 00:09:14.287 "name": "pt1", 00:09:14.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.287 "is_configured": true, 00:09:14.287 "data_offset": 2048, 00:09:14.287 "data_size": 63488 00:09:14.287 }, 00:09:14.287 { 00:09:14.287 "name": "pt2", 00:09:14.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.287 "is_configured": true, 00:09:14.287 "data_offset": 2048, 00:09:14.287 "data_size": 63488 00:09:14.287 }, 00:09:14.287 { 00:09:14.287 "name": "pt3", 00:09:14.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.287 "is_configured": true, 00:09:14.287 "data_offset": 2048, 00:09:14.287 
"data_size": 63488 00:09:14.287 }, 00:09:14.287 { 00:09:14.287 "name": "pt4", 00:09:14.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:14.287 "is_configured": true, 00:09:14.287 "data_offset": 2048, 00:09:14.287 "data_size": 63488 00:09:14.287 } 00:09:14.287 ] 00:09:14.287 }' 00:09:14.287 11:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.287 11:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.551 [2024-12-06 11:52:12.155174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.551 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.551 "name": "raid_bdev1", 00:09:14.551 "aliases": [ 00:09:14.551 "de98fcdf-586f-42a5-b8b4-11cd98fff13c" 
00:09:14.551 ], 00:09:14.551 "product_name": "Raid Volume", 00:09:14.551 "block_size": 512, 00:09:14.551 "num_blocks": 253952, 00:09:14.551 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:14.551 "assigned_rate_limits": { 00:09:14.551 "rw_ios_per_sec": 0, 00:09:14.551 "rw_mbytes_per_sec": 0, 00:09:14.551 "r_mbytes_per_sec": 0, 00:09:14.552 "w_mbytes_per_sec": 0 00:09:14.552 }, 00:09:14.552 "claimed": false, 00:09:14.552 "zoned": false, 00:09:14.552 "supported_io_types": { 00:09:14.552 "read": true, 00:09:14.552 "write": true, 00:09:14.552 "unmap": true, 00:09:14.552 "flush": true, 00:09:14.552 "reset": true, 00:09:14.552 "nvme_admin": false, 00:09:14.552 "nvme_io": false, 00:09:14.552 "nvme_io_md": false, 00:09:14.552 "write_zeroes": true, 00:09:14.552 "zcopy": false, 00:09:14.552 "get_zone_info": false, 00:09:14.552 "zone_management": false, 00:09:14.552 "zone_append": false, 00:09:14.552 "compare": false, 00:09:14.552 "compare_and_write": false, 00:09:14.552 "abort": false, 00:09:14.552 "seek_hole": false, 00:09:14.552 "seek_data": false, 00:09:14.552 "copy": false, 00:09:14.552 "nvme_iov_md": false 00:09:14.552 }, 00:09:14.552 "memory_domains": [ 00:09:14.552 { 00:09:14.552 "dma_device_id": "system", 00:09:14.552 "dma_device_type": 1 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.552 "dma_device_type": 2 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "system", 00:09:14.552 "dma_device_type": 1 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.552 "dma_device_type": 2 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "system", 00:09:14.552 "dma_device_type": 1 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.552 "dma_device_type": 2 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": "system", 00:09:14.552 "dma_device_type": 1 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:14.552 "dma_device_type": 2 00:09:14.552 } 00:09:14.552 ], 00:09:14.552 "driver_specific": { 00:09:14.552 "raid": { 00:09:14.552 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:14.552 "strip_size_kb": 64, 00:09:14.552 "state": "online", 00:09:14.552 "raid_level": "raid0", 00:09:14.552 "superblock": true, 00:09:14.552 "num_base_bdevs": 4, 00:09:14.552 "num_base_bdevs_discovered": 4, 00:09:14.552 "num_base_bdevs_operational": 4, 00:09:14.552 "base_bdevs_list": [ 00:09:14.552 { 00:09:14.552 "name": "pt1", 00:09:14.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.552 "is_configured": true, 00:09:14.552 "data_offset": 2048, 00:09:14.552 "data_size": 63488 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "name": "pt2", 00:09:14.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.552 "is_configured": true, 00:09:14.552 "data_offset": 2048, 00:09:14.552 "data_size": 63488 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "name": "pt3", 00:09:14.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.552 "is_configured": true, 00:09:14.552 "data_offset": 2048, 00:09:14.552 "data_size": 63488 00:09:14.552 }, 00:09:14.552 { 00:09:14.552 "name": "pt4", 00:09:14.552 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:14.552 "is_configured": true, 00:09:14.552 "data_offset": 2048, 00:09:14.552 "data_size": 63488 00:09:14.552 } 00:09:14.552 ] 00:09:14.552 } 00:09:14.552 } 00:09:14.552 }' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:14.552 pt2 00:09:14.552 pt3 00:09:14.552 pt4' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.552 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.811 11:52:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:14.811 [2024-12-06 11:52:12.446633] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de98fcdf-586f-42a5-b8b4-11cd98fff13c 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de98fcdf-586f-42a5-b8b4-11cd98fff13c ']' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 [2024-12-06 11:52:12.494315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.811 [2024-12-06 11:52:12.494345] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.811 [2024-12-06 11:52:12.494417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.811 [2024-12-06 11:52:12.494484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.811 [2024-12-06 11:52:12.494503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.811 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.070 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 [2024-12-06 11:52:12.658031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:15.071 [2024-12-06 11:52:12.659877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:15.071 [2024-12-06 11:52:12.659929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:15.071 [2024-12-06 11:52:12.659963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:15.071 [2024-12-06 11:52:12.660009] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:15.071 [2024-12-06 11:52:12.660048] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:15.071 [2024-12-06 11:52:12.660067] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:15.071 [2024-12-06 11:52:12.660082] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:15.071 [2024-12-06 11:52:12.660096] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.071 [2024-12-06 11:52:12.660104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:15.071 request: 00:09:15.071 { 00:09:15.071 "name": "raid_bdev1", 00:09:15.071 "raid_level": "raid0", 00:09:15.071 "base_bdevs": [ 00:09:15.071 "malloc1", 00:09:15.071 "malloc2", 00:09:15.071 "malloc3", 00:09:15.071 "malloc4" 00:09:15.071 ], 00:09:15.071 "strip_size_kb": 64, 00:09:15.071 "superblock": false, 00:09:15.071 "method": "bdev_raid_create", 00:09:15.071 "req_id": 1 00:09:15.071 } 00:09:15.071 Got JSON-RPC error response 00:09:15.071 response: 00:09:15.071 { 00:09:15.071 "code": -17, 00:09:15.071 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:15.071 } 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 [2024-12-06 11:52:12.721891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.071 [2024-12-06 11:52:12.721937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.071 [2024-12-06 11:52:12.721959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:15.071 [2024-12-06 11:52:12.721968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.071 [2024-12-06 11:52:12.724115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.071 [2024-12-06 11:52:12.724153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.071 [2024-12-06 11:52:12.724216] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:15.071 [2024-12-06 11:52:12.724256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:15.071 pt1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.071 "name": "raid_bdev1", 00:09:15.071 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:15.071 "strip_size_kb": 64, 00:09:15.071 "state": "configuring", 00:09:15.071 "raid_level": "raid0", 00:09:15.071 "superblock": true, 00:09:15.071 "num_base_bdevs": 4, 00:09:15.071 "num_base_bdevs_discovered": 1, 00:09:15.071 "num_base_bdevs_operational": 4, 00:09:15.071 "base_bdevs_list": [ 00:09:15.071 { 00:09:15.071 "name": "pt1", 00:09:15.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.071 "is_configured": true, 00:09:15.071 "data_offset": 2048, 00:09:15.071 "data_size": 63488 00:09:15.071 }, 00:09:15.071 { 00:09:15.071 "name": null, 00:09:15.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.071 "is_configured": false, 00:09:15.071 "data_offset": 2048, 00:09:15.071 "data_size": 63488 00:09:15.071 }, 00:09:15.071 { 00:09:15.071 "name": null, 00:09:15.071 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:15.071 "is_configured": false, 00:09:15.071 "data_offset": 2048, 00:09:15.071 "data_size": 63488 00:09:15.071 }, 00:09:15.071 { 00:09:15.071 "name": null, 00:09:15.071 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:15.071 "is_configured": false, 00:09:15.071 "data_offset": 2048, 00:09:15.071 "data_size": 63488 00:09:15.071 } 00:09:15.071 ] 00:09:15.071 }' 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.071 11:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.640 [2024-12-06 11:52:13.169122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.640 [2024-12-06 11:52:13.169190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.640 [2024-12-06 11:52:13.169210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:15.640 [2024-12-06 11:52:13.169219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.640 [2024-12-06 11:52:13.169576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.640 [2024-12-06 11:52:13.169603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.640 [2024-12-06 11:52:13.169668] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:15.640 [2024-12-06 11:52:13.169699] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:15.640 pt2 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.640 [2024-12-06 11:52:13.177151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.640 11:52:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.640 "name": "raid_bdev1", 00:09:15.640 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:15.640 "strip_size_kb": 64, 00:09:15.640 "state": "configuring", 00:09:15.640 "raid_level": "raid0", 00:09:15.640 "superblock": true, 00:09:15.640 "num_base_bdevs": 4, 00:09:15.640 "num_base_bdevs_discovered": 1, 00:09:15.640 "num_base_bdevs_operational": 4, 00:09:15.640 "base_bdevs_list": [ 00:09:15.640 { 00:09:15.640 "name": "pt1", 00:09:15.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.640 "is_configured": true, 00:09:15.640 "data_offset": 2048, 00:09:15.640 "data_size": 63488 00:09:15.640 }, 00:09:15.640 { 00:09:15.640 "name": null, 00:09:15.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.640 "is_configured": false, 00:09:15.640 "data_offset": 0, 00:09:15.640 "data_size": 63488 00:09:15.640 }, 00:09:15.640 { 00:09:15.640 "name": null, 00:09:15.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.640 "is_configured": false, 00:09:15.640 "data_offset": 2048, 00:09:15.640 "data_size": 63488 00:09:15.640 }, 00:09:15.640 { 00:09:15.640 "name": null, 00:09:15.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:15.640 "is_configured": false, 00:09:15.640 "data_offset": 2048, 00:09:15.640 "data_size": 63488 00:09:15.640 } 00:09:15.640 ] 00:09:15.640 }' 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.640 11:52:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.901 [2024-12-06 11:52:13.620366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.901 [2024-12-06 11:52:13.620423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.901 [2024-12-06 11:52:13.620439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:15.901 [2024-12-06 11:52:13.620450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.901 [2024-12-06 11:52:13.620816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.901 [2024-12-06 11:52:13.620842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.901 [2024-12-06 11:52:13.620904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:15.901 [2024-12-06 11:52:13.620929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:15.901 pt2 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.901 [2024-12-06 11:52:13.632319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:15.901 [2024-12-06 11:52:13.632369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.901 [2024-12-06 11:52:13.632385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:15.901 [2024-12-06 11:52:13.632403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.901 [2024-12-06 11:52:13.632730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.901 [2024-12-06 11:52:13.632757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:15.901 [2024-12-06 11:52:13.632811] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:15.901 [2024-12-06 11:52:13.632831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:15.901 pt3 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.901 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.902 [2024-12-06 11:52:13.644319] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:15.902 [2024-12-06 11:52:13.644366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.902 [2024-12-06 11:52:13.644379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:15.902 [2024-12-06 11:52:13.644389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.902 [2024-12-06 11:52:13.644687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.902 [2024-12-06 11:52:13.644709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:15.902 [2024-12-06 11:52:13.644756] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:15.902 [2024-12-06 11:52:13.644775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:15.902 [2024-12-06 11:52:13.644865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:15.902 [2024-12-06 11:52:13.644875] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:15.902 [2024-12-06 11:52:13.645094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:15.902 [2024-12-06 11:52:13.645208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:15.902 [2024-12-06 11:52:13.645219] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:15.902 [2024-12-06 11:52:13.645323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.902 pt4 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.902 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.161 "name": "raid_bdev1", 00:09:16.161 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:16.161 "strip_size_kb": 64, 00:09:16.161 "state": "online", 00:09:16.161 "raid_level": "raid0", 00:09:16.161 
"superblock": true, 00:09:16.161 "num_base_bdevs": 4, 00:09:16.161 "num_base_bdevs_discovered": 4, 00:09:16.161 "num_base_bdevs_operational": 4, 00:09:16.161 "base_bdevs_list": [ 00:09:16.161 { 00:09:16.161 "name": "pt1", 00:09:16.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.161 "is_configured": true, 00:09:16.161 "data_offset": 2048, 00:09:16.161 "data_size": 63488 00:09:16.161 }, 00:09:16.161 { 00:09:16.161 "name": "pt2", 00:09:16.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.161 "is_configured": true, 00:09:16.161 "data_offset": 2048, 00:09:16.161 "data_size": 63488 00:09:16.161 }, 00:09:16.161 { 00:09:16.161 "name": "pt3", 00:09:16.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.161 "is_configured": true, 00:09:16.161 "data_offset": 2048, 00:09:16.161 "data_size": 63488 00:09:16.161 }, 00:09:16.161 { 00:09:16.161 "name": "pt4", 00:09:16.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:16.161 "is_configured": true, 00:09:16.161 "data_offset": 2048, 00:09:16.161 "data_size": 63488 00:09:16.161 } 00:09:16.161 ] 00:09:16.161 }' 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.161 11:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.420 11:52:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.420 [2024-12-06 11:52:14.075880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.420 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.420 "name": "raid_bdev1", 00:09:16.420 "aliases": [ 00:09:16.420 "de98fcdf-586f-42a5-b8b4-11cd98fff13c" 00:09:16.420 ], 00:09:16.420 "product_name": "Raid Volume", 00:09:16.420 "block_size": 512, 00:09:16.420 "num_blocks": 253952, 00:09:16.420 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:16.420 "assigned_rate_limits": { 00:09:16.420 "rw_ios_per_sec": 0, 00:09:16.420 "rw_mbytes_per_sec": 0, 00:09:16.420 "r_mbytes_per_sec": 0, 00:09:16.420 "w_mbytes_per_sec": 0 00:09:16.420 }, 00:09:16.420 "claimed": false, 00:09:16.420 "zoned": false, 00:09:16.420 "supported_io_types": { 00:09:16.420 "read": true, 00:09:16.420 "write": true, 00:09:16.420 "unmap": true, 00:09:16.420 "flush": true, 00:09:16.420 "reset": true, 00:09:16.420 "nvme_admin": false, 00:09:16.420 "nvme_io": false, 00:09:16.420 "nvme_io_md": false, 00:09:16.420 "write_zeroes": true, 00:09:16.420 "zcopy": false, 00:09:16.420 "get_zone_info": false, 00:09:16.420 "zone_management": false, 00:09:16.420 "zone_append": false, 00:09:16.420 "compare": false, 00:09:16.420 "compare_and_write": false, 00:09:16.420 "abort": false, 00:09:16.420 "seek_hole": false, 00:09:16.420 "seek_data": false, 00:09:16.420 "copy": false, 00:09:16.420 "nvme_iov_md": false 00:09:16.420 }, 00:09:16.420 
"memory_domains": [ 00:09:16.420 { 00:09:16.420 "dma_device_id": "system", 00:09:16.420 "dma_device_type": 1 00:09:16.420 }, 00:09:16.420 { 00:09:16.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.420 "dma_device_type": 2 00:09:16.420 }, 00:09:16.420 { 00:09:16.420 "dma_device_id": "system", 00:09:16.420 "dma_device_type": 1 00:09:16.420 }, 00:09:16.420 { 00:09:16.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.421 "dma_device_type": 2 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "dma_device_id": "system", 00:09:16.421 "dma_device_type": 1 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.421 "dma_device_type": 2 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "dma_device_id": "system", 00:09:16.421 "dma_device_type": 1 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.421 "dma_device_type": 2 00:09:16.421 } 00:09:16.421 ], 00:09:16.421 "driver_specific": { 00:09:16.421 "raid": { 00:09:16.421 "uuid": "de98fcdf-586f-42a5-b8b4-11cd98fff13c", 00:09:16.421 "strip_size_kb": 64, 00:09:16.421 "state": "online", 00:09:16.421 "raid_level": "raid0", 00:09:16.421 "superblock": true, 00:09:16.421 "num_base_bdevs": 4, 00:09:16.421 "num_base_bdevs_discovered": 4, 00:09:16.421 "num_base_bdevs_operational": 4, 00:09:16.421 "base_bdevs_list": [ 00:09:16.421 { 00:09:16.421 "name": "pt1", 00:09:16.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.421 "is_configured": true, 00:09:16.421 "data_offset": 2048, 00:09:16.421 "data_size": 63488 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "name": "pt2", 00:09:16.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.421 "is_configured": true, 00:09:16.421 "data_offset": 2048, 00:09:16.421 "data_size": 63488 00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "name": "pt3", 00:09:16.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.421 "is_configured": true, 00:09:16.421 "data_offset": 2048, 00:09:16.421 "data_size": 63488 
00:09:16.421 }, 00:09:16.421 { 00:09:16.421 "name": "pt4", 00:09:16.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:16.421 "is_configured": true, 00:09:16.421 "data_offset": 2048, 00:09:16.421 "data_size": 63488 00:09:16.421 } 00:09:16.421 ] 00:09:16.421 } 00:09:16.421 } 00:09:16.421 }' 00:09:16.421 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.421 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:16.421 pt2 00:09:16.421 pt3 00:09:16.421 pt4' 00:09:16.421 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.680 [2024-12-06 11:52:14.367391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de98fcdf-586f-42a5-b8b4-11cd98fff13c '!=' de98fcdf-586f-42a5-b8b4-11cd98fff13c ']' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81328 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81328 ']' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81328 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.680 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81328 00:09:16.939 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.939 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.939 killing process with pid 81328 00:09:16.939 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81328' 00:09:16.939 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81328 00:09:16.939 [2024-12-06 11:52:14.448543] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.939 [2024-12-06 11:52:14.448626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.939 [2024-12-06 11:52:14.448686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.939 [2024-12-06 11:52:14.448697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:16.939 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81328 00:09:16.939 [2024-12-06 11:52:14.492264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.199 11:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:17.199 00:09:17.199 real 0m4.061s 00:09:17.199 user 0m6.423s 00:09:17.199 sys 0m0.863s 00:09:17.199 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.199 11:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.199 ************************************ 00:09:17.199 END TEST raid_superblock_test 
00:09:17.199 ************************************ 00:09:17.199 11:52:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:17.199 11:52:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.199 11:52:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.199 11:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.199 ************************************ 00:09:17.199 START TEST raid_read_error_test 00:09:17.199 ************************************ 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.euh1Sa82RP 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81572 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81572 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81572 ']' 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.199 11:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.199 [2024-12-06 11:52:14.908053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:17.199 [2024-12-06 11:52:14.908196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81572 ] 00:09:17.458 [2024-12-06 11:52:15.051521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.458 [2024-12-06 11:52:15.095965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.458 [2024-12-06 11:52:15.137567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.458 [2024-12-06 11:52:15.137604] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.024 BaseBdev1_malloc 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.024 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.282 true 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.282 [2024-12-06 11:52:15.790547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.282 [2024-12-06 11:52:15.790605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.282 [2024-12-06 11:52:15.790626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:18.282 [2024-12-06 11:52:15.790635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.282 [2024-12-06 11:52:15.792780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.282 [2024-12-06 11:52:15.792818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.282 BaseBdev1 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.282 BaseBdev2_malloc 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.282 true 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.282 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 [2024-12-06 11:52:15.844219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.283 [2024-12-06 11:52:15.844311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.283 [2024-12-06 11:52:15.844341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:18.283 [2024-12-06 11:52:15.844355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.283 [2024-12-06 11:52:15.847442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.283 [2024-12-06 11:52:15.847491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.283 BaseBdev2 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 BaseBdev3_malloc 00:09:18.283 11:52:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 true 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 [2024-12-06 11:52:15.884878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.283 [2024-12-06 11:52:15.884936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.283 [2024-12-06 11:52:15.884953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:18.283 [2024-12-06 11:52:15.884961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.283 [2024-12-06 11:52:15.886942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.283 [2024-12-06 11:52:15.886974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:18.283 BaseBdev3 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 BaseBdev4_malloc 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 true 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 [2024-12-06 11:52:15.925192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:18.283 [2024-12-06 11:52:15.925262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.283 [2024-12-06 11:52:15.925281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:18.283 [2024-12-06 11:52:15.925290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.283 [2024-12-06 11:52:15.927245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.283 [2024-12-06 11:52:15.927310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:18.283 BaseBdev4 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 [2024-12-06 11:52:15.937234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.283 [2024-12-06 11:52:15.939004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.283 [2024-12-06 11:52:15.939091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.283 [2024-12-06 11:52:15.939150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:18.283 [2024-12-06 11:52:15.939366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:18.283 [2024-12-06 11:52:15.939385] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:18.283 [2024-12-06 11:52:15.939621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:18.283 [2024-12-06 11:52:15.939772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:18.283 [2024-12-06 11:52:15.939799] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:18.283 [2024-12-06 11:52:15.939908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:18.283 11:52:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.283 "name": "raid_bdev1", 00:09:18.283 "uuid": "b8459105-27ac-41c6-bd98-004f75c90da5", 00:09:18.283 "strip_size_kb": 64, 00:09:18.283 "state": "online", 00:09:18.283 "raid_level": "raid0", 00:09:18.283 "superblock": true, 00:09:18.283 "num_base_bdevs": 4, 00:09:18.283 "num_base_bdevs_discovered": 4, 00:09:18.283 "num_base_bdevs_operational": 4, 00:09:18.283 "base_bdevs_list": [ 00:09:18.283 
{ 00:09:18.283 "name": "BaseBdev1", 00:09:18.283 "uuid": "87844d46-127f-552f-bdce-93913806c01a", 00:09:18.283 "is_configured": true, 00:09:18.283 "data_offset": 2048, 00:09:18.283 "data_size": 63488 00:09:18.283 }, 00:09:18.283 { 00:09:18.283 "name": "BaseBdev2", 00:09:18.283 "uuid": "2f9086c3-a8ef-5480-af98-92fae9a43177", 00:09:18.283 "is_configured": true, 00:09:18.283 "data_offset": 2048, 00:09:18.283 "data_size": 63488 00:09:18.283 }, 00:09:18.283 { 00:09:18.283 "name": "BaseBdev3", 00:09:18.283 "uuid": "589de507-9d50-50e8-ba21-82fd429701a0", 00:09:18.283 "is_configured": true, 00:09:18.283 "data_offset": 2048, 00:09:18.283 "data_size": 63488 00:09:18.283 }, 00:09:18.283 { 00:09:18.283 "name": "BaseBdev4", 00:09:18.283 "uuid": "f064c156-f619-5bb6-a050-16d0c18f2690", 00:09:18.283 "is_configured": true, 00:09:18.283 "data_offset": 2048, 00:09:18.283 "data_size": 63488 00:09:18.283 } 00:09:18.283 ] 00:09:18.283 }' 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.283 11:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.852 11:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.852 11:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.852 [2024-12-06 11:52:16.548509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.792 11:52:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.792 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.792 11:52:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.792 "name": "raid_bdev1", 00:09:19.792 "uuid": "b8459105-27ac-41c6-bd98-004f75c90da5", 00:09:19.792 "strip_size_kb": 64, 00:09:19.792 "state": "online", 00:09:19.792 "raid_level": "raid0", 00:09:19.792 "superblock": true, 00:09:19.792 "num_base_bdevs": 4, 00:09:19.792 "num_base_bdevs_discovered": 4, 00:09:19.792 "num_base_bdevs_operational": 4, 00:09:19.792 "base_bdevs_list": [ 00:09:19.792 { 00:09:19.792 "name": "BaseBdev1", 00:09:19.792 "uuid": "87844d46-127f-552f-bdce-93913806c01a", 00:09:19.792 "is_configured": true, 00:09:19.792 "data_offset": 2048, 00:09:19.792 "data_size": 63488 00:09:19.792 }, 00:09:19.792 { 00:09:19.792 "name": "BaseBdev2", 00:09:19.792 "uuid": "2f9086c3-a8ef-5480-af98-92fae9a43177", 00:09:19.792 "is_configured": true, 00:09:19.792 "data_offset": 2048, 00:09:19.792 "data_size": 63488 00:09:19.792 }, 00:09:19.792 { 00:09:19.792 "name": "BaseBdev3", 00:09:19.792 "uuid": "589de507-9d50-50e8-ba21-82fd429701a0", 00:09:19.792 "is_configured": true, 00:09:19.792 "data_offset": 2048, 00:09:19.792 "data_size": 63488 00:09:19.792 }, 00:09:19.792 { 00:09:19.792 "name": "BaseBdev4", 00:09:19.793 "uuid": "f064c156-f619-5bb6-a050-16d0c18f2690", 00:09:19.793 "is_configured": true, 00:09:19.793 "data_offset": 2048, 00:09:19.793 "data_size": 63488 00:09:19.793 } 00:09:19.793 ] 00:09:19.793 }' 00:09:19.793 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.793 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.363 [2024-12-06 11:52:17.851555] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.363 [2024-12-06 11:52:17.851590] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.363 [2024-12-06 11:52:17.854121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.363 [2024-12-06 11:52:17.854181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.363 [2024-12-06 11:52:17.854228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.363 [2024-12-06 11:52:17.854247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:20.363 { 00:09:20.363 "results": [ 00:09:20.363 { 00:09:20.363 "job": "raid_bdev1", 00:09:20.363 "core_mask": "0x1", 00:09:20.363 "workload": "randrw", 00:09:20.363 "percentage": 50, 00:09:20.363 "status": "finished", 00:09:20.363 "queue_depth": 1, 00:09:20.363 "io_size": 131072, 00:09:20.363 "runtime": 1.303781, 00:09:20.363 "iops": 17425.47252951224, 00:09:20.363 "mibps": 2178.18406618903, 00:09:20.363 "io_failed": 1, 00:09:20.363 "io_timeout": 0, 00:09:20.363 "avg_latency_us": 79.56840396088319, 00:09:20.363 "min_latency_us": 24.258515283842794, 00:09:20.363 "max_latency_us": 1359.3711790393013 00:09:20.363 } 00:09:20.363 ], 00:09:20.363 "core_count": 1 00:09:20.363 } 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81572 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81572 ']' 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81572 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81572 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.363 killing process with pid 81572 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81572' 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81572 00:09:20.363 [2024-12-06 11:52:17.899979] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.363 11:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81572 00:09:20.363 [2024-12-06 11:52:17.935459] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.euh1Sa82RP 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:09:20.623 00:09:20.623 real 0m3.362s 00:09:20.623 user 0m4.324s 00:09:20.623 sys 0m0.535s 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:20.623 11:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.623 ************************************ 00:09:20.623 END TEST raid_read_error_test 00:09:20.623 ************************************ 00:09:20.623 11:52:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:20.623 11:52:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.623 11:52:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.623 11:52:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.623 ************************************ 00:09:20.623 START TEST raid_write_error_test 00:09:20.623 ************************************ 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:20.623 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0c7iDFI7yx 00:09:20.624 11:52:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81706 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81706 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81706 ']' 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.624 11:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.624 [2024-12-06 11:52:18.345230] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:20.624 [2024-12-06 11:52:18.345390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81706 ] 00:09:20.883 [2024-12-06 11:52:18.490335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.883 [2024-12-06 11:52:18.533717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.883 [2024-12-06 11:52:18.575229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.883 [2024-12-06 11:52:18.575275] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 BaseBdev1_malloc 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 true 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.453 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 11:52:19.212661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.713 [2024-12-06 11:52:19.212730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.713 [2024-12-06 11:52:19.212756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:21.713 [2024-12-06 11:52:19.212767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.713 [2024-12-06 11:52:19.214886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.713 [2024-12-06 11:52:19.214923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.713 BaseBdev1 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 BaseBdev2_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:21.713 11:52:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 true 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 11:52:19.269602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:21.713 [2024-12-06 11:52:19.269653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.713 [2024-12-06 11:52:19.269672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:21.713 [2024-12-06 11:52:19.269681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.713 [2024-12-06 11:52:19.271746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.713 [2024-12-06 11:52:19.271782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:21.713 BaseBdev2 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:21.713 BaseBdev3_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 true 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 11:52:19.310214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:21.713 [2024-12-06 11:52:19.310265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.713 [2024-12-06 11:52:19.310282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:21.713 [2024-12-06 11:52:19.310290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.713 [2024-12-06 11:52:19.312298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.713 [2024-12-06 11:52:19.312332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:21.713 BaseBdev3 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 BaseBdev4_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 true 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 11:52:19.350551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:21.713 [2024-12-06 11:52:19.350594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.713 [2024-12-06 11:52:19.350613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:21.713 [2024-12-06 11:52:19.350621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.713 [2024-12-06 11:52:19.352613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.713 [2024-12-06 11:52:19.352648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:21.713 BaseBdev4 
00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 11:52:19.362590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.713 [2024-12-06 11:52:19.364374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.713 [2024-12-06 11:52:19.364445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.713 [2024-12-06 11:52:19.364504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:21.713 [2024-12-06 11:52:19.364685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:21.713 [2024-12-06 11:52:19.364719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:21.713 [2024-12-06 11:52:19.364944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:21.713 [2024-12-06 11:52:19.365081] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:21.713 [2024-12-06 11:52:19.365100] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:21.713 [2024-12-06 11:52:19.365224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.713 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.713 "name": "raid_bdev1", 00:09:21.713 "uuid": "8bd4352a-f84c-49c9-bdf1-eed17745409b", 00:09:21.713 "strip_size_kb": 64, 00:09:21.713 "state": "online", 00:09:21.713 "raid_level": "raid0", 00:09:21.714 "superblock": true, 00:09:21.714 "num_base_bdevs": 4, 00:09:21.714 "num_base_bdevs_discovered": 4, 00:09:21.714 
"num_base_bdevs_operational": 4, 00:09:21.714 "base_bdevs_list": [ 00:09:21.714 { 00:09:21.714 "name": "BaseBdev1", 00:09:21.714 "uuid": "5782c0b2-eb73-5ea3-a9ff-6498b2a17f28", 00:09:21.714 "is_configured": true, 00:09:21.714 "data_offset": 2048, 00:09:21.714 "data_size": 63488 00:09:21.714 }, 00:09:21.714 { 00:09:21.714 "name": "BaseBdev2", 00:09:21.714 "uuid": "73c91ef1-4da1-5985-a22b-bd527707a49b", 00:09:21.714 "is_configured": true, 00:09:21.714 "data_offset": 2048, 00:09:21.714 "data_size": 63488 00:09:21.714 }, 00:09:21.714 { 00:09:21.714 "name": "BaseBdev3", 00:09:21.714 "uuid": "2b3adccf-e227-5956-8b30-4584056da6a4", 00:09:21.714 "is_configured": true, 00:09:21.714 "data_offset": 2048, 00:09:21.714 "data_size": 63488 00:09:21.714 }, 00:09:21.714 { 00:09:21.714 "name": "BaseBdev4", 00:09:21.714 "uuid": "5c7b24c4-1f0e-51ba-a40f-2e7969c3186d", 00:09:21.714 "is_configured": true, 00:09:21.714 "data_offset": 2048, 00:09:21.714 "data_size": 63488 00:09:21.714 } 00:09:21.714 ] 00:09:21.714 }' 00:09:21.714 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.714 11:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.283 11:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.283 [2024-12-06 11:52:19.850077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.223 "name": "raid_bdev1", 00:09:23.223 "uuid": "8bd4352a-f84c-49c9-bdf1-eed17745409b", 00:09:23.223 "strip_size_kb": 64, 00:09:23.223 "state": "online", 00:09:23.223 "raid_level": "raid0", 00:09:23.223 "superblock": true, 00:09:23.223 "num_base_bdevs": 4, 00:09:23.223 "num_base_bdevs_discovered": 4, 00:09:23.223 "num_base_bdevs_operational": 4, 00:09:23.223 "base_bdevs_list": [ 00:09:23.223 { 00:09:23.223 "name": "BaseBdev1", 00:09:23.223 "uuid": "5782c0b2-eb73-5ea3-a9ff-6498b2a17f28", 00:09:23.223 "is_configured": true, 00:09:23.223 "data_offset": 2048, 00:09:23.223 "data_size": 63488 00:09:23.223 }, 00:09:23.223 { 00:09:23.223 "name": "BaseBdev2", 00:09:23.223 "uuid": "73c91ef1-4da1-5985-a22b-bd527707a49b", 00:09:23.223 "is_configured": true, 00:09:23.223 "data_offset": 2048, 00:09:23.223 "data_size": 63488 00:09:23.223 }, 00:09:23.223 { 00:09:23.223 "name": "BaseBdev3", 00:09:23.223 "uuid": "2b3adccf-e227-5956-8b30-4584056da6a4", 00:09:23.223 "is_configured": true, 00:09:23.223 "data_offset": 2048, 00:09:23.223 "data_size": 63488 00:09:23.223 }, 00:09:23.223 { 00:09:23.223 "name": "BaseBdev4", 00:09:23.223 "uuid": "5c7b24c4-1f0e-51ba-a40f-2e7969c3186d", 00:09:23.223 "is_configured": true, 00:09:23.223 "data_offset": 2048, 00:09:23.223 "data_size": 63488 00:09:23.223 } 00:09:23.223 ] 00:09:23.223 }' 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.223 11:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:23.793 [2024-12-06 11:52:21.257961] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.793 [2024-12-06 11:52:21.257994] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.793 [2024-12-06 11:52:21.260411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.793 [2024-12-06 11:52:21.260475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.793 [2024-12-06 11:52:21.260527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.793 [2024-12-06 11:52:21.260542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:23.793 { 00:09:23.793 "results": [ 00:09:23.793 { 00:09:23.793 "job": "raid_bdev1", 00:09:23.793 "core_mask": "0x1", 00:09:23.793 "workload": "randrw", 00:09:23.793 "percentage": 50, 00:09:23.793 "status": "finished", 00:09:23.793 "queue_depth": 1, 00:09:23.793 "io_size": 131072, 00:09:23.793 "runtime": 1.408722, 00:09:23.793 "iops": 17442.76017553499, 00:09:23.793 "mibps": 2180.3450219418737, 00:09:23.793 "io_failed": 1, 00:09:23.793 "io_timeout": 0, 00:09:23.793 "avg_latency_us": 79.4389638785922, 00:09:23.793 "min_latency_us": 24.482096069868994, 00:09:23.793 "max_latency_us": 1359.3711790393013 00:09:23.793 } 00:09:23.793 ], 00:09:23.793 "core_count": 1 00:09:23.793 } 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81706 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81706 ']' 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81706 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81706 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.793 killing process with pid 81706 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81706' 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81706 00:09:23.793 [2024-12-06 11:52:21.304206] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.793 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81706 00:09:23.793 [2024-12-06 11:52:21.339017] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0c7iDFI7yx 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:24.054 00:09:24.054 real 0m3.329s 00:09:24.054 user 0m4.139s 00:09:24.054 sys 0m0.552s 00:09:24.054 11:52:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.054 11:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.054 ************************************ 00:09:24.054 END TEST raid_write_error_test 00:09:24.054 ************************************ 00:09:24.054 11:52:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:24.054 11:52:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:24.054 11:52:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.054 11:52:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.054 11:52:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.054 ************************************ 00:09:24.054 START TEST raid_state_function_test 00:09:24.054 ************************************ 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81833 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.054 Process raid pid: 81833 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81833' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81833 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 81833 ']' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.054 11:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.054 [2024-12-06 11:52:21.741240] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:24.054 [2024-12-06 11:52:21.741396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.314 [2024-12-06 11:52:21.869253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.314 [2024-12-06 11:52:21.911826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.314 [2024-12-06 11:52:21.953102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.314 [2024-12-06 11:52:21.953138] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.884 [2024-12-06 11:52:22.573509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.884 [2024-12-06 11:52:22.573558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.884 [2024-12-06 11:52:22.573570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.884 [2024-12-06 11:52:22.573579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.884 [2024-12-06 11:52:22.573586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:24.884 [2024-12-06 11:52:22.573596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.884 [2024-12-06 11:52:22.573602] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:24.884 [2024-12-06 11:52:22.573611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.884 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.884 "name": "Existed_Raid", 00:09:24.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.884 "strip_size_kb": 64, 00:09:24.884 "state": "configuring", 00:09:24.884 "raid_level": "concat", 00:09:24.884 "superblock": false, 00:09:24.884 "num_base_bdevs": 4, 00:09:24.884 "num_base_bdevs_discovered": 0, 00:09:24.884 "num_base_bdevs_operational": 4, 00:09:24.884 "base_bdevs_list": [ 00:09:24.884 { 00:09:24.884 "name": "BaseBdev1", 00:09:24.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.884 "is_configured": false, 00:09:24.884 "data_offset": 0, 00:09:24.884 "data_size": 0 00:09:24.885 }, 00:09:24.885 { 00:09:24.885 "name": "BaseBdev2", 00:09:24.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.885 "is_configured": false, 00:09:24.885 "data_offset": 0, 00:09:24.885 "data_size": 0 00:09:24.885 }, 00:09:24.885 { 00:09:24.885 "name": "BaseBdev3", 00:09:24.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.885 "is_configured": false, 00:09:24.885 "data_offset": 0, 00:09:24.885 "data_size": 0 00:09:24.885 }, 00:09:24.885 { 00:09:24.885 "name": "BaseBdev4", 00:09:24.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.885 "is_configured": false, 00:09:24.885 "data_offset": 0, 00:09:24.885 "data_size": 0 00:09:24.885 } 00:09:24.885 ] 00:09:24.885 }' 00:09:24.885 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.885 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.519 [2024-12-06 11:52:22.992673] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.519 [2024-12-06 11:52:22.992721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:25.519 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.520 11:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.520 [2024-12-06 11:52:23.004676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.520 [2024-12-06 11:52:23.004715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.520 [2024-12-06 11:52:23.004723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.520 [2024-12-06 11:52:23.004732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.520 [2024-12-06 11:52:23.004738] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.520 [2024-12-06 11:52:23.004746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.520 [2024-12-06 11:52:23.004752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.520 [2024-12-06 11:52:23.004760] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.520 [2024-12-06 11:52:23.025337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.520 BaseBdev1 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.520 [ 00:09:25.520 { 00:09:25.520 "name": "BaseBdev1", 00:09:25.520 "aliases": [ 00:09:25.520 "8486f42f-ef7e-41aa-92ec-451d22d02f03" 00:09:25.520 ], 00:09:25.520 "product_name": "Malloc disk", 00:09:25.520 "block_size": 512, 00:09:25.520 "num_blocks": 65536, 00:09:25.520 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:25.520 "assigned_rate_limits": { 00:09:25.520 "rw_ios_per_sec": 0, 00:09:25.520 "rw_mbytes_per_sec": 0, 00:09:25.520 "r_mbytes_per_sec": 0, 00:09:25.520 "w_mbytes_per_sec": 0 00:09:25.520 }, 00:09:25.520 "claimed": true, 00:09:25.520 "claim_type": "exclusive_write", 00:09:25.520 "zoned": false, 00:09:25.520 "supported_io_types": { 00:09:25.520 "read": true, 00:09:25.520 "write": true, 00:09:25.520 "unmap": true, 00:09:25.520 "flush": true, 00:09:25.520 "reset": true, 00:09:25.520 "nvme_admin": false, 00:09:25.520 "nvme_io": false, 00:09:25.520 "nvme_io_md": false, 00:09:25.520 "write_zeroes": true, 00:09:25.520 "zcopy": true, 00:09:25.520 "get_zone_info": false, 00:09:25.520 "zone_management": false, 00:09:25.520 "zone_append": false, 00:09:25.520 "compare": false, 00:09:25.520 "compare_and_write": false, 00:09:25.520 "abort": true, 00:09:25.520 "seek_hole": false, 00:09:25.520 "seek_data": false, 00:09:25.520 "copy": true, 00:09:25.520 "nvme_iov_md": false 00:09:25.520 }, 00:09:25.520 "memory_domains": [ 00:09:25.520 { 00:09:25.520 "dma_device_id": "system", 00:09:25.520 "dma_device_type": 1 00:09:25.520 }, 00:09:25.520 { 00:09:25.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.520 "dma_device_type": 2 00:09:25.520 } 00:09:25.520 ], 00:09:25.520 "driver_specific": {} 00:09:25.520 } 00:09:25.520 ] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.520 "name": "Existed_Raid", 
00:09:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.520 "strip_size_kb": 64, 00:09:25.520 "state": "configuring", 00:09:25.520 "raid_level": "concat", 00:09:25.520 "superblock": false, 00:09:25.520 "num_base_bdevs": 4, 00:09:25.520 "num_base_bdevs_discovered": 1, 00:09:25.520 "num_base_bdevs_operational": 4, 00:09:25.520 "base_bdevs_list": [ 00:09:25.520 { 00:09:25.520 "name": "BaseBdev1", 00:09:25.520 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:25.520 "is_configured": true, 00:09:25.520 "data_offset": 0, 00:09:25.520 "data_size": 65536 00:09:25.520 }, 00:09:25.520 { 00:09:25.520 "name": "BaseBdev2", 00:09:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.520 "is_configured": false, 00:09:25.520 "data_offset": 0, 00:09:25.520 "data_size": 0 00:09:25.520 }, 00:09:25.520 { 00:09:25.520 "name": "BaseBdev3", 00:09:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.520 "is_configured": false, 00:09:25.520 "data_offset": 0, 00:09:25.520 "data_size": 0 00:09:25.520 }, 00:09:25.520 { 00:09:25.520 "name": "BaseBdev4", 00:09:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.520 "is_configured": false, 00:09:25.520 "data_offset": 0, 00:09:25.520 "data_size": 0 00:09:25.520 } 00:09:25.520 ] 00:09:25.520 }' 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.520 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.793 [2024-12-06 11:52:23.480576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.793 [2024-12-06 11:52:23.480619] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.793 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.793 [2024-12-06 11:52:23.492612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.793 [2024-12-06 11:52:23.494400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.793 [2024-12-06 11:52:23.494438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.793 [2024-12-06 11:52:23.494447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.793 [2024-12-06 11:52:23.494455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.793 [2024-12-06 11:52:23.494462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.793 [2024-12-06 11:52:23.494482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.794 "name": "Existed_Raid", 00:09:25.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.794 "strip_size_kb": 64, 00:09:25.794 "state": "configuring", 00:09:25.794 "raid_level": "concat", 00:09:25.794 "superblock": false, 00:09:25.794 "num_base_bdevs": 4, 00:09:25.794 
"num_base_bdevs_discovered": 1, 00:09:25.794 "num_base_bdevs_operational": 4, 00:09:25.794 "base_bdevs_list": [ 00:09:25.794 { 00:09:25.794 "name": "BaseBdev1", 00:09:25.794 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:25.794 "is_configured": true, 00:09:25.794 "data_offset": 0, 00:09:25.794 "data_size": 65536 00:09:25.794 }, 00:09:25.794 { 00:09:25.794 "name": "BaseBdev2", 00:09:25.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.794 "is_configured": false, 00:09:25.794 "data_offset": 0, 00:09:25.794 "data_size": 0 00:09:25.794 }, 00:09:25.794 { 00:09:25.794 "name": "BaseBdev3", 00:09:25.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.794 "is_configured": false, 00:09:25.794 "data_offset": 0, 00:09:25.794 "data_size": 0 00:09:25.794 }, 00:09:25.794 { 00:09:25.794 "name": "BaseBdev4", 00:09:25.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.794 "is_configured": false, 00:09:25.794 "data_offset": 0, 00:09:25.794 "data_size": 0 00:09:25.794 } 00:09:25.794 ] 00:09:25.794 }' 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.794 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.364 [2024-12-06 11:52:23.940932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.364 BaseBdev2 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:26.364 11:52:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.364 [ 00:09:26.364 { 00:09:26.364 "name": "BaseBdev2", 00:09:26.364 "aliases": [ 00:09:26.364 "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e" 00:09:26.364 ], 00:09:26.364 "product_name": "Malloc disk", 00:09:26.364 "block_size": 512, 00:09:26.364 "num_blocks": 65536, 00:09:26.364 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:26.364 "assigned_rate_limits": { 00:09:26.364 "rw_ios_per_sec": 0, 00:09:26.364 "rw_mbytes_per_sec": 0, 00:09:26.364 "r_mbytes_per_sec": 0, 00:09:26.364 "w_mbytes_per_sec": 0 00:09:26.364 }, 00:09:26.364 "claimed": true, 00:09:26.364 "claim_type": "exclusive_write", 00:09:26.364 "zoned": false, 00:09:26.364 "supported_io_types": { 
00:09:26.364 "read": true, 00:09:26.364 "write": true, 00:09:26.364 "unmap": true, 00:09:26.364 "flush": true, 00:09:26.364 "reset": true, 00:09:26.364 "nvme_admin": false, 00:09:26.364 "nvme_io": false, 00:09:26.364 "nvme_io_md": false, 00:09:26.364 "write_zeroes": true, 00:09:26.364 "zcopy": true, 00:09:26.364 "get_zone_info": false, 00:09:26.364 "zone_management": false, 00:09:26.364 "zone_append": false, 00:09:26.364 "compare": false, 00:09:26.364 "compare_and_write": false, 00:09:26.364 "abort": true, 00:09:26.364 "seek_hole": false, 00:09:26.364 "seek_data": false, 00:09:26.364 "copy": true, 00:09:26.364 "nvme_iov_md": false 00:09:26.364 }, 00:09:26.364 "memory_domains": [ 00:09:26.364 { 00:09:26.364 "dma_device_id": "system", 00:09:26.364 "dma_device_type": 1 00:09:26.364 }, 00:09:26.364 { 00:09:26.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.364 "dma_device_type": 2 00:09:26.364 } 00:09:26.364 ], 00:09:26.364 "driver_specific": {} 00:09:26.364 } 00:09:26.364 ] 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.364 11:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.364 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.364 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.364 "name": "Existed_Raid", 00:09:26.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.364 "strip_size_kb": 64, 00:09:26.364 "state": "configuring", 00:09:26.364 "raid_level": "concat", 00:09:26.364 "superblock": false, 00:09:26.364 "num_base_bdevs": 4, 00:09:26.364 "num_base_bdevs_discovered": 2, 00:09:26.364 "num_base_bdevs_operational": 4, 00:09:26.364 "base_bdevs_list": [ 00:09:26.364 { 00:09:26.364 "name": "BaseBdev1", 00:09:26.364 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:26.364 "is_configured": true, 00:09:26.364 "data_offset": 0, 00:09:26.364 "data_size": 65536 00:09:26.364 }, 00:09:26.364 { 00:09:26.364 "name": "BaseBdev2", 00:09:26.364 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:26.364 
"is_configured": true, 00:09:26.364 "data_offset": 0, 00:09:26.364 "data_size": 65536 00:09:26.364 }, 00:09:26.364 { 00:09:26.364 "name": "BaseBdev3", 00:09:26.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.364 "is_configured": false, 00:09:26.364 "data_offset": 0, 00:09:26.364 "data_size": 0 00:09:26.364 }, 00:09:26.364 { 00:09:26.364 "name": "BaseBdev4", 00:09:26.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.364 "is_configured": false, 00:09:26.364 "data_offset": 0, 00:09:26.364 "data_size": 0 00:09:26.364 } 00:09:26.364 ] 00:09:26.364 }' 00:09:26.364 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.364 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 [2024-12-06 11:52:24.351083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.625 BaseBdev3 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.625 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 [ 00:09:26.625 { 00:09:26.625 "name": "BaseBdev3", 00:09:26.625 "aliases": [ 00:09:26.625 "75a45b37-73ec-4cd1-bc64-dd5fea7c8955" 00:09:26.625 ], 00:09:26.625 "product_name": "Malloc disk", 00:09:26.625 "block_size": 512, 00:09:26.625 "num_blocks": 65536, 00:09:26.625 "uuid": "75a45b37-73ec-4cd1-bc64-dd5fea7c8955", 00:09:26.625 "assigned_rate_limits": { 00:09:26.625 "rw_ios_per_sec": 0, 00:09:26.625 "rw_mbytes_per_sec": 0, 00:09:26.625 "r_mbytes_per_sec": 0, 00:09:26.625 "w_mbytes_per_sec": 0 00:09:26.625 }, 00:09:26.625 "claimed": true, 00:09:26.625 "claim_type": "exclusive_write", 00:09:26.625 "zoned": false, 00:09:26.625 "supported_io_types": { 00:09:26.625 "read": true, 00:09:26.625 "write": true, 00:09:26.625 "unmap": true, 00:09:26.625 "flush": true, 00:09:26.625 "reset": true, 00:09:26.625 "nvme_admin": false, 00:09:26.625 "nvme_io": false, 00:09:26.885 "nvme_io_md": false, 00:09:26.885 "write_zeroes": true, 00:09:26.885 "zcopy": true, 00:09:26.885 "get_zone_info": false, 00:09:26.885 "zone_management": false, 00:09:26.885 "zone_append": false, 00:09:26.885 "compare": false, 00:09:26.885 "compare_and_write": false, 
00:09:26.885 "abort": true, 00:09:26.885 "seek_hole": false, 00:09:26.885 "seek_data": false, 00:09:26.885 "copy": true, 00:09:26.885 "nvme_iov_md": false 00:09:26.885 }, 00:09:26.885 "memory_domains": [ 00:09:26.885 { 00:09:26.885 "dma_device_id": "system", 00:09:26.885 "dma_device_type": 1 00:09:26.885 }, 00:09:26.885 { 00:09:26.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.885 "dma_device_type": 2 00:09:26.885 } 00:09:26.885 ], 00:09:26.885 "driver_specific": {} 00:09:26.885 } 00:09:26.885 ] 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.886 "name": "Existed_Raid", 00:09:26.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.886 "strip_size_kb": 64, 00:09:26.886 "state": "configuring", 00:09:26.886 "raid_level": "concat", 00:09:26.886 "superblock": false, 00:09:26.886 "num_base_bdevs": 4, 00:09:26.886 "num_base_bdevs_discovered": 3, 00:09:26.886 "num_base_bdevs_operational": 4, 00:09:26.886 "base_bdevs_list": [ 00:09:26.886 { 00:09:26.886 "name": "BaseBdev1", 00:09:26.886 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:26.886 "is_configured": true, 00:09:26.886 "data_offset": 0, 00:09:26.886 "data_size": 65536 00:09:26.886 }, 00:09:26.886 { 00:09:26.886 "name": "BaseBdev2", 00:09:26.886 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:26.886 "is_configured": true, 00:09:26.886 "data_offset": 0, 00:09:26.886 "data_size": 65536 00:09:26.886 }, 00:09:26.886 { 00:09:26.886 "name": "BaseBdev3", 00:09:26.886 "uuid": "75a45b37-73ec-4cd1-bc64-dd5fea7c8955", 00:09:26.886 "is_configured": true, 00:09:26.886 "data_offset": 0, 00:09:26.886 "data_size": 65536 00:09:26.886 }, 00:09:26.886 { 00:09:26.886 "name": "BaseBdev4", 00:09:26.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.886 "is_configured": false, 
00:09:26.886 "data_offset": 0, 00:09:26.886 "data_size": 0 00:09:26.886 } 00:09:26.886 ] 00:09:26.886 }' 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.886 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.147 [2024-12-06 11:52:24.817228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:27.147 [2024-12-06 11:52:24.817286] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:27.147 [2024-12-06 11:52:24.817295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:27.147 [2024-12-06 11:52:24.817557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:27.147 [2024-12-06 11:52:24.817692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:27.147 [2024-12-06 11:52:24.817719] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:27.147 [2024-12-06 11:52:24.817902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.147 BaseBdev4 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.147 [ 00:09:27.147 { 00:09:27.147 "name": "BaseBdev4", 00:09:27.147 "aliases": [ 00:09:27.147 "2c5e4bf0-b822-48ff-afc3-2a96da7d1251" 00:09:27.147 ], 00:09:27.147 "product_name": "Malloc disk", 00:09:27.147 "block_size": 512, 00:09:27.147 "num_blocks": 65536, 00:09:27.147 "uuid": "2c5e4bf0-b822-48ff-afc3-2a96da7d1251", 00:09:27.147 "assigned_rate_limits": { 00:09:27.147 "rw_ios_per_sec": 0, 00:09:27.147 "rw_mbytes_per_sec": 0, 00:09:27.147 "r_mbytes_per_sec": 0, 00:09:27.147 "w_mbytes_per_sec": 0 00:09:27.147 }, 00:09:27.147 "claimed": true, 00:09:27.147 "claim_type": "exclusive_write", 00:09:27.147 "zoned": false, 00:09:27.147 "supported_io_types": { 00:09:27.147 "read": true, 00:09:27.147 "write": true, 00:09:27.147 "unmap": true, 00:09:27.147 "flush": true, 00:09:27.147 "reset": true, 00:09:27.147 
"nvme_admin": false, 00:09:27.147 "nvme_io": false, 00:09:27.147 "nvme_io_md": false, 00:09:27.147 "write_zeroes": true, 00:09:27.147 "zcopy": true, 00:09:27.147 "get_zone_info": false, 00:09:27.147 "zone_management": false, 00:09:27.147 "zone_append": false, 00:09:27.147 "compare": false, 00:09:27.147 "compare_and_write": false, 00:09:27.147 "abort": true, 00:09:27.147 "seek_hole": false, 00:09:27.147 "seek_data": false, 00:09:27.147 "copy": true, 00:09:27.147 "nvme_iov_md": false 00:09:27.147 }, 00:09:27.147 "memory_domains": [ 00:09:27.147 { 00:09:27.147 "dma_device_id": "system", 00:09:27.147 "dma_device_type": 1 00:09:27.147 }, 00:09:27.147 { 00:09:27.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.147 "dma_device_type": 2 00:09:27.147 } 00:09:27.147 ], 00:09:27.147 "driver_specific": {} 00:09:27.147 } 00:09:27.147 ] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.147 
11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.147 "name": "Existed_Raid", 00:09:27.147 "uuid": "21e12f2f-b496-4dee-a88c-4ae7791a5a21", 00:09:27.147 "strip_size_kb": 64, 00:09:27.147 "state": "online", 00:09:27.147 "raid_level": "concat", 00:09:27.147 "superblock": false, 00:09:27.147 "num_base_bdevs": 4, 00:09:27.147 "num_base_bdevs_discovered": 4, 00:09:27.147 "num_base_bdevs_operational": 4, 00:09:27.147 "base_bdevs_list": [ 00:09:27.147 { 00:09:27.147 "name": "BaseBdev1", 00:09:27.147 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:27.147 "is_configured": true, 00:09:27.147 "data_offset": 0, 00:09:27.147 "data_size": 65536 00:09:27.147 }, 00:09:27.147 { 00:09:27.147 "name": "BaseBdev2", 00:09:27.147 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:27.147 "is_configured": true, 00:09:27.147 "data_offset": 0, 00:09:27.147 "data_size": 65536 00:09:27.147 }, 00:09:27.147 { 00:09:27.147 "name": "BaseBdev3", 
00:09:27.147 "uuid": "75a45b37-73ec-4cd1-bc64-dd5fea7c8955", 00:09:27.147 "is_configured": true, 00:09:27.147 "data_offset": 0, 00:09:27.147 "data_size": 65536 00:09:27.147 }, 00:09:27.147 { 00:09:27.147 "name": "BaseBdev4", 00:09:27.147 "uuid": "2c5e4bf0-b822-48ff-afc3-2a96da7d1251", 00:09:27.147 "is_configured": true, 00:09:27.147 "data_offset": 0, 00:09:27.147 "data_size": 65536 00:09:27.147 } 00:09:27.147 ] 00:09:27.147 }' 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.147 11:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.717 [2024-12-06 11:52:25.304703] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.717 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.717 
11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.717 "name": "Existed_Raid", 00:09:27.717 "aliases": [ 00:09:27.717 "21e12f2f-b496-4dee-a88c-4ae7791a5a21" 00:09:27.717 ], 00:09:27.717 "product_name": "Raid Volume", 00:09:27.717 "block_size": 512, 00:09:27.717 "num_blocks": 262144, 00:09:27.717 "uuid": "21e12f2f-b496-4dee-a88c-4ae7791a5a21", 00:09:27.717 "assigned_rate_limits": { 00:09:27.717 "rw_ios_per_sec": 0, 00:09:27.717 "rw_mbytes_per_sec": 0, 00:09:27.717 "r_mbytes_per_sec": 0, 00:09:27.717 "w_mbytes_per_sec": 0 00:09:27.717 }, 00:09:27.717 "claimed": false, 00:09:27.717 "zoned": false, 00:09:27.717 "supported_io_types": { 00:09:27.717 "read": true, 00:09:27.717 "write": true, 00:09:27.717 "unmap": true, 00:09:27.717 "flush": true, 00:09:27.717 "reset": true, 00:09:27.717 "nvme_admin": false, 00:09:27.717 "nvme_io": false, 00:09:27.717 "nvme_io_md": false, 00:09:27.717 "write_zeroes": true, 00:09:27.717 "zcopy": false, 00:09:27.717 "get_zone_info": false, 00:09:27.717 "zone_management": false, 00:09:27.717 "zone_append": false, 00:09:27.717 "compare": false, 00:09:27.718 "compare_and_write": false, 00:09:27.718 "abort": false, 00:09:27.718 "seek_hole": false, 00:09:27.718 "seek_data": false, 00:09:27.718 "copy": false, 00:09:27.718 "nvme_iov_md": false 00:09:27.718 }, 00:09:27.718 "memory_domains": [ 00:09:27.718 { 00:09:27.718 "dma_device_id": "system", 00:09:27.718 "dma_device_type": 1 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.718 "dma_device_type": 2 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "system", 00:09:27.718 "dma_device_type": 1 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.718 "dma_device_type": 2 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "system", 00:09:27.718 "dma_device_type": 1 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:27.718 "dma_device_type": 2 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "system", 00:09:27.718 "dma_device_type": 1 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.718 "dma_device_type": 2 00:09:27.718 } 00:09:27.718 ], 00:09:27.718 "driver_specific": { 00:09:27.718 "raid": { 00:09:27.718 "uuid": "21e12f2f-b496-4dee-a88c-4ae7791a5a21", 00:09:27.718 "strip_size_kb": 64, 00:09:27.718 "state": "online", 00:09:27.718 "raid_level": "concat", 00:09:27.718 "superblock": false, 00:09:27.718 "num_base_bdevs": 4, 00:09:27.718 "num_base_bdevs_discovered": 4, 00:09:27.718 "num_base_bdevs_operational": 4, 00:09:27.718 "base_bdevs_list": [ 00:09:27.718 { 00:09:27.718 "name": "BaseBdev1", 00:09:27.718 "uuid": "8486f42f-ef7e-41aa-92ec-451d22d02f03", 00:09:27.718 "is_configured": true, 00:09:27.718 "data_offset": 0, 00:09:27.718 "data_size": 65536 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "name": "BaseBdev2", 00:09:27.718 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:27.718 "is_configured": true, 00:09:27.718 "data_offset": 0, 00:09:27.718 "data_size": 65536 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "name": "BaseBdev3", 00:09:27.718 "uuid": "75a45b37-73ec-4cd1-bc64-dd5fea7c8955", 00:09:27.718 "is_configured": true, 00:09:27.718 "data_offset": 0, 00:09:27.718 "data_size": 65536 00:09:27.718 }, 00:09:27.718 { 00:09:27.718 "name": "BaseBdev4", 00:09:27.718 "uuid": "2c5e4bf0-b822-48ff-afc3-2a96da7d1251", 00:09:27.718 "is_configured": true, 00:09:27.718 "data_offset": 0, 00:09:27.718 "data_size": 65536 00:09:27.718 } 00:09:27.718 ] 00:09:27.718 } 00:09:27.718 } 00:09:27.718 }' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:27.718 BaseBdev2 
00:09:27.718 BaseBdev3 00:09:27.718 BaseBdev4' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.718 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.978 11:52:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.978 11:52:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.978 [2024-12-06 11:52:25.591950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.978 [2024-12-06 11:52:25.591979] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.978 [2024-12-06 11:52:25.592026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.978 "name": "Existed_Raid", 00:09:27.978 "uuid": "21e12f2f-b496-4dee-a88c-4ae7791a5a21", 00:09:27.978 "strip_size_kb": 64, 00:09:27.978 "state": "offline", 00:09:27.978 "raid_level": "concat", 00:09:27.978 "superblock": false, 00:09:27.978 "num_base_bdevs": 4, 00:09:27.978 "num_base_bdevs_discovered": 3, 00:09:27.978 "num_base_bdevs_operational": 3, 00:09:27.978 "base_bdevs_list": [ 00:09:27.978 { 00:09:27.978 "name": null, 00:09:27.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.978 "is_configured": false, 00:09:27.978 "data_offset": 0, 00:09:27.978 "data_size": 65536 00:09:27.978 }, 00:09:27.978 { 00:09:27.978 "name": "BaseBdev2", 00:09:27.978 "uuid": "19cc9cda-c1b6-4c40-be67-a33c0a73cc3e", 00:09:27.978 "is_configured": 
true, 00:09:27.978 "data_offset": 0, 00:09:27.978 "data_size": 65536 00:09:27.978 }, 00:09:27.978 { 00:09:27.978 "name": "BaseBdev3", 00:09:27.978 "uuid": "75a45b37-73ec-4cd1-bc64-dd5fea7c8955", 00:09:27.978 "is_configured": true, 00:09:27.978 "data_offset": 0, 00:09:27.978 "data_size": 65536 00:09:27.978 }, 00:09:27.978 { 00:09:27.978 "name": "BaseBdev4", 00:09:27.978 "uuid": "2c5e4bf0-b822-48ff-afc3-2a96da7d1251", 00:09:27.978 "is_configured": true, 00:09:27.978 "data_offset": 0, 00:09:27.978 "data_size": 65536 00:09:27.978 } 00:09:27.978 ] 00:09:27.978 }' 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.978 11:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 [2024-12-06 11:52:26.078207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 [2024-12-06 11:52:26.145088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.548 11:52:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 [2024-12-06 11:52:26.192003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:28.548 [2024-12-06 11:52:26.192046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.548 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 [ 00:09:28.548 { 00:09:28.548 "name": "BaseBdev2", 00:09:28.548 "aliases": [ 00:09:28.548 "6a6fa117-6a63-4e7a-95c9-a98e770835f7" 00:09:28.548 ], 00:09:28.548 "product_name": "Malloc disk", 00:09:28.548 "block_size": 512, 00:09:28.548 "num_blocks": 65536, 00:09:28.548 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:28.548 "assigned_rate_limits": { 00:09:28.548 "rw_ios_per_sec": 0, 00:09:28.548 "rw_mbytes_per_sec": 0, 00:09:28.548 "r_mbytes_per_sec": 0, 00:09:28.549 "w_mbytes_per_sec": 0 00:09:28.549 }, 00:09:28.549 "claimed": false, 00:09:28.549 "zoned": false, 00:09:28.549 "supported_io_types": { 00:09:28.549 "read": true, 00:09:28.549 "write": true, 00:09:28.549 "unmap": true, 00:09:28.549 "flush": true, 00:09:28.549 "reset": true, 00:09:28.549 "nvme_admin": false, 00:09:28.549 "nvme_io": false, 00:09:28.549 "nvme_io_md": false, 00:09:28.549 "write_zeroes": true, 00:09:28.549 "zcopy": true, 00:09:28.549 "get_zone_info": false, 00:09:28.549 "zone_management": false, 00:09:28.549 "zone_append": false, 00:09:28.549 "compare": false, 00:09:28.549 "compare_and_write": false, 00:09:28.549 "abort": true, 00:09:28.549 "seek_hole": false, 00:09:28.549 
"seek_data": false, 00:09:28.549 "copy": true, 00:09:28.549 "nvme_iov_md": false 00:09:28.549 }, 00:09:28.549 "memory_domains": [ 00:09:28.549 { 00:09:28.549 "dma_device_id": "system", 00:09:28.549 "dma_device_type": 1 00:09:28.549 }, 00:09:28.549 { 00:09:28.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.549 "dma_device_type": 2 00:09:28.549 } 00:09:28.549 ], 00:09:28.549 "driver_specific": {} 00:09:28.549 } 00:09:28.549 ] 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.549 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.809 BaseBdev3 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.809 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.809 [ 00:09:28.809 { 00:09:28.809 "name": "BaseBdev3", 00:09:28.809 "aliases": [ 00:09:28.809 "fdd55678-a800-444d-ae61-3143e1912e46" 00:09:28.809 ], 00:09:28.809 "product_name": "Malloc disk", 00:09:28.809 "block_size": 512, 00:09:28.809 "num_blocks": 65536, 00:09:28.809 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:28.809 "assigned_rate_limits": { 00:09:28.809 "rw_ios_per_sec": 0, 00:09:28.809 "rw_mbytes_per_sec": 0, 00:09:28.809 "r_mbytes_per_sec": 0, 00:09:28.809 "w_mbytes_per_sec": 0 00:09:28.809 }, 00:09:28.809 "claimed": false, 00:09:28.809 "zoned": false, 00:09:28.809 "supported_io_types": { 00:09:28.809 "read": true, 00:09:28.809 "write": true, 00:09:28.809 "unmap": true, 00:09:28.809 "flush": true, 00:09:28.810 "reset": true, 00:09:28.810 "nvme_admin": false, 00:09:28.810 "nvme_io": false, 00:09:28.810 "nvme_io_md": false, 00:09:28.810 "write_zeroes": true, 00:09:28.810 "zcopy": true, 00:09:28.810 "get_zone_info": false, 00:09:28.810 "zone_management": false, 00:09:28.810 "zone_append": false, 00:09:28.810 "compare": false, 00:09:28.810 "compare_and_write": false, 00:09:28.810 "abort": true, 00:09:28.810 "seek_hole": false, 00:09:28.810 "seek_data": false, 
00:09:28.810 "copy": true, 00:09:28.810 "nvme_iov_md": false 00:09:28.810 }, 00:09:28.810 "memory_domains": [ 00:09:28.810 { 00:09:28.810 "dma_device_id": "system", 00:09:28.810 "dma_device_type": 1 00:09:28.810 }, 00:09:28.810 { 00:09:28.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.810 "dma_device_type": 2 00:09:28.810 } 00:09:28.810 ], 00:09:28.810 "driver_specific": {} 00:09:28.810 } 00:09:28.810 ] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 BaseBdev4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.810 
11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 [ 00:09:28.810 { 00:09:28.810 "name": "BaseBdev4", 00:09:28.810 "aliases": [ 00:09:28.810 "e8c42f53-c08f-4903-8e89-c61eb2dbb75c" 00:09:28.810 ], 00:09:28.810 "product_name": "Malloc disk", 00:09:28.810 "block_size": 512, 00:09:28.810 "num_blocks": 65536, 00:09:28.810 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:28.810 "assigned_rate_limits": { 00:09:28.810 "rw_ios_per_sec": 0, 00:09:28.810 "rw_mbytes_per_sec": 0, 00:09:28.810 "r_mbytes_per_sec": 0, 00:09:28.810 "w_mbytes_per_sec": 0 00:09:28.810 }, 00:09:28.810 "claimed": false, 00:09:28.810 "zoned": false, 00:09:28.810 "supported_io_types": { 00:09:28.810 "read": true, 00:09:28.810 "write": true, 00:09:28.810 "unmap": true, 00:09:28.810 "flush": true, 00:09:28.810 "reset": true, 00:09:28.810 "nvme_admin": false, 00:09:28.810 "nvme_io": false, 00:09:28.810 "nvme_io_md": false, 00:09:28.810 "write_zeroes": true, 00:09:28.810 "zcopy": true, 00:09:28.810 "get_zone_info": false, 00:09:28.810 "zone_management": false, 00:09:28.810 "zone_append": false, 00:09:28.810 "compare": false, 00:09:28.810 "compare_and_write": false, 00:09:28.810 "abort": true, 00:09:28.810 "seek_hole": false, 00:09:28.810 "seek_data": false, 00:09:28.810 
"copy": true, 00:09:28.810 "nvme_iov_md": false 00:09:28.810 }, 00:09:28.810 "memory_domains": [ 00:09:28.810 { 00:09:28.810 "dma_device_id": "system", 00:09:28.810 "dma_device_type": 1 00:09:28.810 }, 00:09:28.810 { 00:09:28.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.810 "dma_device_type": 2 00:09:28.810 } 00:09:28.810 ], 00:09:28.810 "driver_specific": {} 00:09:28.810 } 00:09:28.810 ] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 [2024-12-06 11:52:26.410666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.810 [2024-12-06 11:52:26.410708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.810 [2024-12-06 11:52:26.410727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.810 [2024-12-06 11:52:26.412509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.810 [2024-12-06 11:52:26.412554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.810 "name": "Existed_Raid", 00:09:28.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.810 "strip_size_kb": 64, 00:09:28.810 "state": "configuring", 00:09:28.810 
"raid_level": "concat", 00:09:28.810 "superblock": false, 00:09:28.810 "num_base_bdevs": 4, 00:09:28.810 "num_base_bdevs_discovered": 3, 00:09:28.810 "num_base_bdevs_operational": 4, 00:09:28.810 "base_bdevs_list": [ 00:09:28.810 { 00:09:28.810 "name": "BaseBdev1", 00:09:28.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.810 "is_configured": false, 00:09:28.810 "data_offset": 0, 00:09:28.810 "data_size": 0 00:09:28.810 }, 00:09:28.810 { 00:09:28.810 "name": "BaseBdev2", 00:09:28.810 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:28.810 "is_configured": true, 00:09:28.810 "data_offset": 0, 00:09:28.810 "data_size": 65536 00:09:28.810 }, 00:09:28.810 { 00:09:28.810 "name": "BaseBdev3", 00:09:28.810 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:28.810 "is_configured": true, 00:09:28.810 "data_offset": 0, 00:09:28.810 "data_size": 65536 00:09:28.810 }, 00:09:28.810 { 00:09:28.810 "name": "BaseBdev4", 00:09:28.810 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:28.810 "is_configured": true, 00:09:28.810 "data_offset": 0, 00:09:28.810 "data_size": 65536 00:09:28.810 } 00:09:28.810 ] 00:09:28.810 }' 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.810 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.071 [2024-12-06 11:52:26.805950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.071 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.330 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.330 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.330 "name": "Existed_Raid", 00:09:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.331 "strip_size_kb": 64, 00:09:29.331 "state": "configuring", 00:09:29.331 "raid_level": "concat", 00:09:29.331 "superblock": false, 
00:09:29.331 "num_base_bdevs": 4, 00:09:29.331 "num_base_bdevs_discovered": 2, 00:09:29.331 "num_base_bdevs_operational": 4, 00:09:29.331 "base_bdevs_list": [ 00:09:29.331 { 00:09:29.331 "name": "BaseBdev1", 00:09:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.331 "is_configured": false, 00:09:29.331 "data_offset": 0, 00:09:29.331 "data_size": 0 00:09:29.331 }, 00:09:29.331 { 00:09:29.331 "name": null, 00:09:29.331 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:29.331 "is_configured": false, 00:09:29.331 "data_offset": 0, 00:09:29.331 "data_size": 65536 00:09:29.331 }, 00:09:29.331 { 00:09:29.331 "name": "BaseBdev3", 00:09:29.331 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:29.331 "is_configured": true, 00:09:29.331 "data_offset": 0, 00:09:29.331 "data_size": 65536 00:09:29.331 }, 00:09:29.331 { 00:09:29.331 "name": "BaseBdev4", 00:09:29.331 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:29.331 "is_configured": true, 00:09:29.331 "data_offset": 0, 00:09:29.331 "data_size": 65536 00:09:29.331 } 00:09:29.331 ] 00:09:29.331 }' 00:09:29.331 11:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.331 11:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:29.590 11:52:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.590 [2024-12-06 11:52:27.268047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.590 BaseBdev1 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.590 [ 00:09:29.590 { 00:09:29.590 "name": "BaseBdev1", 00:09:29.590 "aliases": [ 00:09:29.590 "880f21e3-737e-4cef-ae28-c79088e8b95c" 00:09:29.590 ], 00:09:29.590 "product_name": "Malloc disk", 00:09:29.590 "block_size": 512, 00:09:29.590 "num_blocks": 65536, 00:09:29.590 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:29.590 "assigned_rate_limits": { 00:09:29.590 "rw_ios_per_sec": 0, 00:09:29.590 "rw_mbytes_per_sec": 0, 00:09:29.590 "r_mbytes_per_sec": 0, 00:09:29.590 "w_mbytes_per_sec": 0 00:09:29.590 }, 00:09:29.590 "claimed": true, 00:09:29.590 "claim_type": "exclusive_write", 00:09:29.590 "zoned": false, 00:09:29.590 "supported_io_types": { 00:09:29.590 "read": true, 00:09:29.590 "write": true, 00:09:29.590 "unmap": true, 00:09:29.590 "flush": true, 00:09:29.590 "reset": true, 00:09:29.590 "nvme_admin": false, 00:09:29.590 "nvme_io": false, 00:09:29.590 "nvme_io_md": false, 00:09:29.590 "write_zeroes": true, 00:09:29.590 "zcopy": true, 00:09:29.590 "get_zone_info": false, 00:09:29.590 "zone_management": false, 00:09:29.590 "zone_append": false, 00:09:29.590 "compare": false, 00:09:29.590 "compare_and_write": false, 00:09:29.590 "abort": true, 00:09:29.590 "seek_hole": false, 00:09:29.590 "seek_data": false, 00:09:29.590 "copy": true, 00:09:29.590 "nvme_iov_md": false 00:09:29.590 }, 00:09:29.590 "memory_domains": [ 00:09:29.590 { 00:09:29.590 "dma_device_id": "system", 00:09:29.590 "dma_device_type": 1 00:09:29.590 }, 00:09:29.590 { 00:09:29.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.590 "dma_device_type": 2 00:09:29.590 } 00:09:29.590 ], 00:09:29.590 "driver_specific": {} 00:09:29.590 } 00:09:29.590 ] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:29.590 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.591 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.851 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.851 "name": "Existed_Raid", 00:09:29.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.851 "strip_size_kb": 64, 00:09:29.851 "state": "configuring", 00:09:29.851 "raid_level": "concat", 00:09:29.851 "superblock": false, 
00:09:29.851 "num_base_bdevs": 4, 00:09:29.851 "num_base_bdevs_discovered": 3, 00:09:29.851 "num_base_bdevs_operational": 4, 00:09:29.851 "base_bdevs_list": [ 00:09:29.851 { 00:09:29.851 "name": "BaseBdev1", 00:09:29.851 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:29.851 "is_configured": true, 00:09:29.851 "data_offset": 0, 00:09:29.851 "data_size": 65536 00:09:29.851 }, 00:09:29.851 { 00:09:29.851 "name": null, 00:09:29.851 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:29.851 "is_configured": false, 00:09:29.851 "data_offset": 0, 00:09:29.851 "data_size": 65536 00:09:29.851 }, 00:09:29.851 { 00:09:29.851 "name": "BaseBdev3", 00:09:29.851 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:29.851 "is_configured": true, 00:09:29.851 "data_offset": 0, 00:09:29.851 "data_size": 65536 00:09:29.851 }, 00:09:29.851 { 00:09:29.851 "name": "BaseBdev4", 00:09:29.851 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:29.851 "is_configured": true, 00:09:29.851 "data_offset": 0, 00:09:29.851 "data_size": 65536 00:09:29.851 } 00:09:29.851 ] 00:09:29.851 }' 00:09:29.851 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.851 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.112 11:52:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.112 [2024-12-06 11:52:27.759244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.112 11:52:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.112 "name": "Existed_Raid", 00:09:30.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.112 "strip_size_kb": 64, 00:09:30.112 "state": "configuring", 00:09:30.112 "raid_level": "concat", 00:09:30.112 "superblock": false, 00:09:30.112 "num_base_bdevs": 4, 00:09:30.112 "num_base_bdevs_discovered": 2, 00:09:30.112 "num_base_bdevs_operational": 4, 00:09:30.112 "base_bdevs_list": [ 00:09:30.112 { 00:09:30.112 "name": "BaseBdev1", 00:09:30.112 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:30.112 "is_configured": true, 00:09:30.112 "data_offset": 0, 00:09:30.112 "data_size": 65536 00:09:30.112 }, 00:09:30.112 { 00:09:30.112 "name": null, 00:09:30.112 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:30.112 "is_configured": false, 00:09:30.112 "data_offset": 0, 00:09:30.112 "data_size": 65536 00:09:30.112 }, 00:09:30.112 { 00:09:30.112 "name": null, 00:09:30.112 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:30.112 "is_configured": false, 00:09:30.112 "data_offset": 0, 00:09:30.112 "data_size": 65536 00:09:30.112 }, 00:09:30.112 { 00:09:30.112 "name": "BaseBdev4", 00:09:30.112 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:30.112 "is_configured": true, 00:09:30.112 "data_offset": 0, 00:09:30.112 "data_size": 65536 00:09:30.112 } 00:09:30.112 ] 00:09:30.112 }' 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.112 11:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.682 [2024-12-06 11:52:28.222478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.682 "name": "Existed_Raid", 00:09:30.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.682 "strip_size_kb": 64, 00:09:30.682 "state": "configuring", 00:09:30.682 "raid_level": "concat", 00:09:30.682 "superblock": false, 00:09:30.682 "num_base_bdevs": 4, 00:09:30.682 "num_base_bdevs_discovered": 3, 00:09:30.682 "num_base_bdevs_operational": 4, 00:09:30.682 "base_bdevs_list": [ 00:09:30.682 { 00:09:30.682 "name": "BaseBdev1", 00:09:30.682 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:30.682 "is_configured": true, 00:09:30.682 "data_offset": 0, 00:09:30.682 "data_size": 65536 00:09:30.682 }, 00:09:30.682 { 00:09:30.682 "name": null, 00:09:30.682 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:30.682 "is_configured": false, 00:09:30.682 "data_offset": 0, 00:09:30.682 "data_size": 65536 00:09:30.682 }, 00:09:30.682 { 00:09:30.682 "name": "BaseBdev3", 00:09:30.682 "uuid": 
"fdd55678-a800-444d-ae61-3143e1912e46", 00:09:30.682 "is_configured": true, 00:09:30.682 "data_offset": 0, 00:09:30.682 "data_size": 65536 00:09:30.682 }, 00:09:30.682 { 00:09:30.682 "name": "BaseBdev4", 00:09:30.682 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:30.682 "is_configured": true, 00:09:30.682 "data_offset": 0, 00:09:30.682 "data_size": 65536 00:09:30.682 } 00:09:30.682 ] 00:09:30.682 }' 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.682 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.942 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.942 [2024-12-06 11:52:28.693656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.202 "name": "Existed_Raid", 00:09:31.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.202 "strip_size_kb": 64, 00:09:31.202 "state": "configuring", 00:09:31.202 "raid_level": "concat", 00:09:31.202 "superblock": false, 00:09:31.202 "num_base_bdevs": 4, 00:09:31.202 
"num_base_bdevs_discovered": 2, 00:09:31.202 "num_base_bdevs_operational": 4, 00:09:31.202 "base_bdevs_list": [ 00:09:31.202 { 00:09:31.202 "name": null, 00:09:31.202 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:31.202 "is_configured": false, 00:09:31.202 "data_offset": 0, 00:09:31.202 "data_size": 65536 00:09:31.202 }, 00:09:31.202 { 00:09:31.202 "name": null, 00:09:31.202 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:31.202 "is_configured": false, 00:09:31.202 "data_offset": 0, 00:09:31.202 "data_size": 65536 00:09:31.202 }, 00:09:31.202 { 00:09:31.202 "name": "BaseBdev3", 00:09:31.202 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:31.202 "is_configured": true, 00:09:31.202 "data_offset": 0, 00:09:31.202 "data_size": 65536 00:09:31.202 }, 00:09:31.202 { 00:09:31.202 "name": "BaseBdev4", 00:09:31.202 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:31.202 "is_configured": true, 00:09:31.202 "data_offset": 0, 00:09:31.202 "data_size": 65536 00:09:31.202 } 00:09:31.202 ] 00:09:31.202 }' 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.202 11:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.462 [2024-12-06 11:52:29.159229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.462 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.462 "name": "Existed_Raid", 00:09:31.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.462 "strip_size_kb": 64, 00:09:31.462 "state": "configuring", 00:09:31.462 "raid_level": "concat", 00:09:31.462 "superblock": false, 00:09:31.462 "num_base_bdevs": 4, 00:09:31.462 "num_base_bdevs_discovered": 3, 00:09:31.462 "num_base_bdevs_operational": 4, 00:09:31.462 "base_bdevs_list": [ 00:09:31.462 { 00:09:31.462 "name": null, 00:09:31.462 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:31.462 "is_configured": false, 00:09:31.462 "data_offset": 0, 00:09:31.462 "data_size": 65536 00:09:31.462 }, 00:09:31.462 { 00:09:31.462 "name": "BaseBdev2", 00:09:31.462 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:31.462 "is_configured": true, 00:09:31.462 "data_offset": 0, 00:09:31.462 "data_size": 65536 00:09:31.462 }, 00:09:31.462 { 00:09:31.462 "name": "BaseBdev3", 00:09:31.462 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:31.463 "is_configured": true, 00:09:31.463 "data_offset": 0, 00:09:31.463 "data_size": 65536 00:09:31.463 }, 00:09:31.463 { 00:09:31.463 "name": "BaseBdev4", 00:09:31.463 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:31.463 "is_configured": true, 00:09:31.463 "data_offset": 0, 00:09:31.463 "data_size": 65536 00:09:31.463 } 00:09:31.463 ] 00:09:31.463 }' 00:09:31.463 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.463 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 880f21e3-737e-4cef-ae28-c79088e8b95c 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 [2024-12-06 11:52:29.697016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.034 [2024-12-06 11:52:29.697063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:32.034 [2024-12-06 11:52:29.697070] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:32.034 [2024-12-06 11:52:29.697337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:32.034 [2024-12-06 11:52:29.697452] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:32.034 [2024-12-06 11:52:29.697470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:32.034 [2024-12-06 11:52:29.697630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.034 NewBaseBdev 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.034 [ 00:09:32.034 { 00:09:32.034 "name": "NewBaseBdev", 00:09:32.034 "aliases": [ 00:09:32.034 "880f21e3-737e-4cef-ae28-c79088e8b95c" 00:09:32.034 ], 00:09:32.034 "product_name": "Malloc disk", 00:09:32.034 "block_size": 512, 00:09:32.034 "num_blocks": 65536, 00:09:32.034 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:32.034 "assigned_rate_limits": { 00:09:32.034 "rw_ios_per_sec": 0, 00:09:32.034 "rw_mbytes_per_sec": 0, 00:09:32.034 "r_mbytes_per_sec": 0, 00:09:32.034 "w_mbytes_per_sec": 0 00:09:32.034 }, 00:09:32.034 "claimed": true, 00:09:32.034 "claim_type": "exclusive_write", 00:09:32.034 "zoned": false, 00:09:32.034 "supported_io_types": { 00:09:32.034 "read": true, 00:09:32.034 "write": true, 00:09:32.034 "unmap": true, 00:09:32.034 "flush": true, 00:09:32.034 "reset": true, 00:09:32.034 "nvme_admin": false, 00:09:32.034 "nvme_io": false, 00:09:32.034 "nvme_io_md": false, 00:09:32.034 "write_zeroes": true, 00:09:32.034 "zcopy": true, 00:09:32.034 "get_zone_info": false, 00:09:32.034 "zone_management": false, 00:09:32.034 "zone_append": false, 00:09:32.034 "compare": false, 00:09:32.034 "compare_and_write": false, 00:09:32.034 "abort": true, 00:09:32.034 "seek_hole": false, 00:09:32.034 "seek_data": false, 00:09:32.034 "copy": true, 00:09:32.034 "nvme_iov_md": false 00:09:32.034 }, 00:09:32.034 "memory_domains": [ 00:09:32.034 { 00:09:32.034 "dma_device_id": "system", 00:09:32.034 "dma_device_type": 1 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.034 "dma_device_type": 2 00:09:32.034 } 00:09:32.034 ], 00:09:32.034 "driver_specific": {} 00:09:32.034 } 00:09:32.034 ] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.034 "name": "Existed_Raid", 00:09:32.034 "uuid": "dd93dabc-996a-4884-81c2-033983e1ff63", 00:09:32.034 "strip_size_kb": 64, 00:09:32.034 "state": "online", 00:09:32.034 "raid_level": "concat", 00:09:32.034 "superblock": false, 00:09:32.034 
"num_base_bdevs": 4, 00:09:32.034 "num_base_bdevs_discovered": 4, 00:09:32.034 "num_base_bdevs_operational": 4, 00:09:32.034 "base_bdevs_list": [ 00:09:32.034 { 00:09:32.034 "name": "NewBaseBdev", 00:09:32.034 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 0, 00:09:32.034 "data_size": 65536 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "name": "BaseBdev2", 00:09:32.034 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 0, 00:09:32.034 "data_size": 65536 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "name": "BaseBdev3", 00:09:32.034 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 0, 00:09:32.034 "data_size": 65536 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "name": "BaseBdev4", 00:09:32.034 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 0, 00:09:32.034 "data_size": 65536 00:09:32.034 } 00:09:32.034 ] 00:09:32.034 }' 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.034 11:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.603 11:52:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.603 [2024-12-06 11:52:30.156572] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.603 "name": "Existed_Raid", 00:09:32.603 "aliases": [ 00:09:32.603 "dd93dabc-996a-4884-81c2-033983e1ff63" 00:09:32.603 ], 00:09:32.603 "product_name": "Raid Volume", 00:09:32.603 "block_size": 512, 00:09:32.603 "num_blocks": 262144, 00:09:32.603 "uuid": "dd93dabc-996a-4884-81c2-033983e1ff63", 00:09:32.603 "assigned_rate_limits": { 00:09:32.603 "rw_ios_per_sec": 0, 00:09:32.603 "rw_mbytes_per_sec": 0, 00:09:32.603 "r_mbytes_per_sec": 0, 00:09:32.603 "w_mbytes_per_sec": 0 00:09:32.603 }, 00:09:32.603 "claimed": false, 00:09:32.603 "zoned": false, 00:09:32.603 "supported_io_types": { 00:09:32.603 "read": true, 00:09:32.603 "write": true, 00:09:32.603 "unmap": true, 00:09:32.603 "flush": true, 00:09:32.603 "reset": true, 00:09:32.603 "nvme_admin": false, 00:09:32.603 "nvme_io": false, 00:09:32.603 "nvme_io_md": false, 00:09:32.603 "write_zeroes": true, 00:09:32.603 "zcopy": false, 00:09:32.603 "get_zone_info": false, 00:09:32.603 "zone_management": false, 00:09:32.603 "zone_append": false, 00:09:32.603 "compare": false, 00:09:32.603 "compare_and_write": false, 00:09:32.603 "abort": false, 00:09:32.603 "seek_hole": false, 00:09:32.603 "seek_data": false, 00:09:32.603 "copy": false, 00:09:32.603 "nvme_iov_md": false 00:09:32.603 }, 
00:09:32.603 "memory_domains": [ 00:09:32.603 { 00:09:32.603 "dma_device_id": "system", 00:09:32.603 "dma_device_type": 1 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.603 "dma_device_type": 2 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "system", 00:09:32.603 "dma_device_type": 1 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.603 "dma_device_type": 2 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "system", 00:09:32.603 "dma_device_type": 1 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.603 "dma_device_type": 2 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "system", 00:09:32.603 "dma_device_type": 1 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.603 "dma_device_type": 2 00:09:32.603 } 00:09:32.603 ], 00:09:32.603 "driver_specific": { 00:09:32.603 "raid": { 00:09:32.603 "uuid": "dd93dabc-996a-4884-81c2-033983e1ff63", 00:09:32.603 "strip_size_kb": 64, 00:09:32.603 "state": "online", 00:09:32.603 "raid_level": "concat", 00:09:32.603 "superblock": false, 00:09:32.603 "num_base_bdevs": 4, 00:09:32.603 "num_base_bdevs_discovered": 4, 00:09:32.603 "num_base_bdevs_operational": 4, 00:09:32.603 "base_bdevs_list": [ 00:09:32.603 { 00:09:32.603 "name": "NewBaseBdev", 00:09:32.603 "uuid": "880f21e3-737e-4cef-ae28-c79088e8b95c", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 0, 00:09:32.603 "data_size": 65536 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "name": "BaseBdev2", 00:09:32.603 "uuid": "6a6fa117-6a63-4e7a-95c9-a98e770835f7", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 0, 00:09:32.603 "data_size": 65536 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "name": "BaseBdev3", 00:09:32.603 "uuid": "fdd55678-a800-444d-ae61-3143e1912e46", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 0, 
00:09:32.603 "data_size": 65536 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "name": "BaseBdev4", 00:09:32.603 "uuid": "e8c42f53-c08f-4903-8e89-c61eb2dbb75c", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 0, 00:09:32.603 "data_size": 65536 00:09:32.603 } 00:09:32.603 ] 00:09:32.603 } 00:09:32.603 } 00:09:32.603 }' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:32.603 BaseBdev2 00:09:32.603 BaseBdev3 00:09:32.603 BaseBdev4' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.604 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.863 [2024-12-06 11:52:30.451753] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.863 [2024-12-06 11:52:30.451781] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.863 [2024-12-06 11:52:30.451844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.863 [2024-12-06 11:52:30.451904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.863 [2024-12-06 11:52:30.451914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81833 00:09:32.863 11:52:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 81833 ']' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 81833 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81833 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.863 killing process with pid 81833 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81833' 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 81833 00:09:32.863 [2024-12-06 11:52:30.501012] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.863 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 81833 00:09:32.863 [2024-12-06 11:52:30.541134] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:33.122 00:09:33.122 real 0m9.130s 00:09:33.122 user 0m15.653s 00:09:33.122 sys 0m1.845s 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.122 ************************************ 00:09:33.122 END TEST raid_state_function_test 00:09:33.122 ************************************ 00:09:33.122 11:52:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:33.122 11:52:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.122 11:52:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.122 11:52:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.122 ************************************ 00:09:33.122 START TEST raid_state_function_test_sb 00:09:33.122 ************************************ 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.122 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:33.123 Process raid pid: 82477 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82477 00:09:33.123 
11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82477' 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82477 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82477 ']' 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.123 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.382 [2024-12-06 11:52:30.955142] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:33.382 [2024-12-06 11:52:30.955321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.382 [2024-12-06 11:52:31.102308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.640 [2024-12-06 11:52:31.147089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.640 [2024-12-06 11:52:31.189140] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.640 [2024-12-06 11:52:31.189240] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.209 [2024-12-06 11:52:31.782183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.209 [2024-12-06 11:52:31.782239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.209 [2024-12-06 11:52:31.782250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.209 [2024-12-06 11:52:31.782260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.209 [2024-12-06 11:52:31.782265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:34.209 [2024-12-06 11:52:31.782277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.209 [2024-12-06 11:52:31.782283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:34.209 [2024-12-06 11:52:31.782291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.209 
11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.209 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.209 "name": "Existed_Raid", 00:09:34.209 "uuid": "e7305c1f-0e8f-4393-97b6-ffd4671e1bd7", 00:09:34.209 "strip_size_kb": 64, 00:09:34.209 "state": "configuring", 00:09:34.209 "raid_level": "concat", 00:09:34.209 "superblock": true, 00:09:34.209 "num_base_bdevs": 4, 00:09:34.209 "num_base_bdevs_discovered": 0, 00:09:34.209 "num_base_bdevs_operational": 4, 00:09:34.209 "base_bdevs_list": [ 00:09:34.209 { 00:09:34.209 "name": "BaseBdev1", 00:09:34.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.209 "is_configured": false, 00:09:34.209 "data_offset": 0, 00:09:34.209 "data_size": 0 00:09:34.210 }, 00:09:34.210 { 00:09:34.210 "name": "BaseBdev2", 00:09:34.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.210 "is_configured": false, 00:09:34.210 "data_offset": 0, 00:09:34.210 "data_size": 0 00:09:34.210 }, 00:09:34.210 { 00:09:34.210 "name": "BaseBdev3", 00:09:34.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.210 "is_configured": false, 00:09:34.210 "data_offset": 0, 00:09:34.210 "data_size": 0 00:09:34.210 }, 00:09:34.210 { 00:09:34.210 "name": "BaseBdev4", 00:09:34.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.210 "is_configured": false, 00:09:34.210 "data_offset": 0, 00:09:34.210 "data_size": 0 00:09:34.210 } 00:09:34.210 ] 00:09:34.210 }' 00:09:34.210 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.210 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.470 11:52:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.470 [2024-12-06 11:52:32.209338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.470 [2024-12-06 11:52:32.209420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.470 [2024-12-06 11:52:32.221367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.470 [2024-12-06 11:52:32.221438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.470 [2024-12-06 11:52:32.221463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.470 [2024-12-06 11:52:32.221485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.470 [2024-12-06 11:52:32.221502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.470 [2024-12-06 11:52:32.221521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.470 [2024-12-06 11:52:32.221538] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:34.470 [2024-12-06 11:52:32.221558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.470 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.730 [2024-12-06 11:52:32.242087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.730 BaseBdev1 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.730 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.730 [ 00:09:34.730 { 00:09:34.730 "name": "BaseBdev1", 00:09:34.730 "aliases": [ 00:09:34.730 "27bf582c-9883-4818-8c43-85a7a493f9f4" 00:09:34.730 ], 00:09:34.730 "product_name": "Malloc disk", 00:09:34.730 "block_size": 512, 00:09:34.731 "num_blocks": 65536, 00:09:34.731 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:34.731 "assigned_rate_limits": { 00:09:34.731 "rw_ios_per_sec": 0, 00:09:34.731 "rw_mbytes_per_sec": 0, 00:09:34.731 "r_mbytes_per_sec": 0, 00:09:34.731 "w_mbytes_per_sec": 0 00:09:34.731 }, 00:09:34.731 "claimed": true, 00:09:34.731 "claim_type": "exclusive_write", 00:09:34.731 "zoned": false, 00:09:34.731 "supported_io_types": { 00:09:34.731 "read": true, 00:09:34.731 "write": true, 00:09:34.731 "unmap": true, 00:09:34.731 "flush": true, 00:09:34.731 "reset": true, 00:09:34.731 "nvme_admin": false, 00:09:34.731 "nvme_io": false, 00:09:34.731 "nvme_io_md": false, 00:09:34.731 "write_zeroes": true, 00:09:34.731 "zcopy": true, 00:09:34.731 "get_zone_info": false, 00:09:34.731 "zone_management": false, 00:09:34.731 "zone_append": false, 00:09:34.731 "compare": false, 00:09:34.731 "compare_and_write": false, 00:09:34.731 "abort": true, 00:09:34.731 "seek_hole": false, 00:09:34.731 "seek_data": false, 00:09:34.731 "copy": true, 00:09:34.731 "nvme_iov_md": false 00:09:34.731 }, 00:09:34.731 "memory_domains": [ 00:09:34.731 { 00:09:34.731 "dma_device_id": "system", 00:09:34.731 "dma_device_type": 1 00:09:34.731 }, 00:09:34.731 { 00:09:34.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.731 "dma_device_type": 2 00:09:34.731 } 
00:09:34.731 ], 00:09:34.731 "driver_specific": {} 00:09:34.731 } 00:09:34.731 ] 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.731 11:52:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.731 "name": "Existed_Raid", 00:09:34.731 "uuid": "24359c12-a271-4c58-871f-4eb6ff75689c", 00:09:34.731 "strip_size_kb": 64, 00:09:34.731 "state": "configuring", 00:09:34.731 "raid_level": "concat", 00:09:34.731 "superblock": true, 00:09:34.731 "num_base_bdevs": 4, 00:09:34.731 "num_base_bdevs_discovered": 1, 00:09:34.731 "num_base_bdevs_operational": 4, 00:09:34.731 "base_bdevs_list": [ 00:09:34.731 { 00:09:34.731 "name": "BaseBdev1", 00:09:34.731 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:34.731 "is_configured": true, 00:09:34.731 "data_offset": 2048, 00:09:34.731 "data_size": 63488 00:09:34.731 }, 00:09:34.731 { 00:09:34.731 "name": "BaseBdev2", 00:09:34.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.731 "is_configured": false, 00:09:34.731 "data_offset": 0, 00:09:34.731 "data_size": 0 00:09:34.731 }, 00:09:34.731 { 00:09:34.731 "name": "BaseBdev3", 00:09:34.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.731 "is_configured": false, 00:09:34.731 "data_offset": 0, 00:09:34.731 "data_size": 0 00:09:34.731 }, 00:09:34.731 { 00:09:34.731 "name": "BaseBdev4", 00:09:34.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.731 "is_configured": false, 00:09:34.731 "data_offset": 0, 00:09:34.731 "data_size": 0 00:09:34.731 } 00:09:34.731 ] 00:09:34.731 }' 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.731 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.300 11:52:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 [2024-12-06 11:52:32.757225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.300 [2024-12-06 11:52:32.757274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 [2024-12-06 11:52:32.765287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.300 [2024-12-06 11:52:32.767080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.300 [2024-12-06 11:52:32.767118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.300 [2024-12-06 11:52:32.767127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.300 [2024-12-06 11:52:32.767135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.300 [2024-12-06 11:52:32.767141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.300 [2024-12-06 11:52:32.767148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:35.300 "name": "Existed_Raid", 00:09:35.300 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:35.300 "strip_size_kb": 64, 00:09:35.300 "state": "configuring", 00:09:35.300 "raid_level": "concat", 00:09:35.300 "superblock": true, 00:09:35.300 "num_base_bdevs": 4, 00:09:35.300 "num_base_bdevs_discovered": 1, 00:09:35.300 "num_base_bdevs_operational": 4, 00:09:35.300 "base_bdevs_list": [ 00:09:35.300 { 00:09:35.300 "name": "BaseBdev1", 00:09:35.300 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:35.300 "is_configured": true, 00:09:35.300 "data_offset": 2048, 00:09:35.300 "data_size": 63488 00:09:35.300 }, 00:09:35.300 { 00:09:35.300 "name": "BaseBdev2", 00:09:35.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.300 "is_configured": false, 00:09:35.300 "data_offset": 0, 00:09:35.300 "data_size": 0 00:09:35.300 }, 00:09:35.300 { 00:09:35.300 "name": "BaseBdev3", 00:09:35.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.300 "is_configured": false, 00:09:35.300 "data_offset": 0, 00:09:35.300 "data_size": 0 00:09:35.300 }, 00:09:35.300 { 00:09:35.300 "name": "BaseBdev4", 00:09:35.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.300 "is_configured": false, 00:09:35.300 "data_offset": 0, 00:09:35.300 "data_size": 0 00:09:35.300 } 00:09:35.300 ] 00:09:35.300 }' 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.300 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 [2024-12-06 11:52:33.223942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:35.560 BaseBdev2 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 [ 00:09:35.560 { 00:09:35.560 "name": "BaseBdev2", 00:09:35.560 "aliases": [ 00:09:35.560 "a0a0d208-8f12-46e5-88b2-cedde395cf76" 00:09:35.560 ], 00:09:35.560 "product_name": "Malloc disk", 00:09:35.560 "block_size": 512, 00:09:35.560 "num_blocks": 65536, 00:09:35.560 "uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 
00:09:35.560 "assigned_rate_limits": { 00:09:35.560 "rw_ios_per_sec": 0, 00:09:35.560 "rw_mbytes_per_sec": 0, 00:09:35.560 "r_mbytes_per_sec": 0, 00:09:35.560 "w_mbytes_per_sec": 0 00:09:35.560 }, 00:09:35.560 "claimed": true, 00:09:35.560 "claim_type": "exclusive_write", 00:09:35.560 "zoned": false, 00:09:35.560 "supported_io_types": { 00:09:35.560 "read": true, 00:09:35.560 "write": true, 00:09:35.560 "unmap": true, 00:09:35.560 "flush": true, 00:09:35.560 "reset": true, 00:09:35.560 "nvme_admin": false, 00:09:35.560 "nvme_io": false, 00:09:35.560 "nvme_io_md": false, 00:09:35.560 "write_zeroes": true, 00:09:35.560 "zcopy": true, 00:09:35.560 "get_zone_info": false, 00:09:35.560 "zone_management": false, 00:09:35.560 "zone_append": false, 00:09:35.560 "compare": false, 00:09:35.560 "compare_and_write": false, 00:09:35.560 "abort": true, 00:09:35.560 "seek_hole": false, 00:09:35.560 "seek_data": false, 00:09:35.560 "copy": true, 00:09:35.560 "nvme_iov_md": false 00:09:35.560 }, 00:09:35.560 "memory_domains": [ 00:09:35.560 { 00:09:35.560 "dma_device_id": "system", 00:09:35.560 "dma_device_type": 1 00:09:35.560 }, 00:09:35.560 { 00:09:35.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.560 "dma_device_type": 2 00:09:35.560 } 00:09:35.560 ], 00:09:35.560 "driver_specific": {} 00:09:35.560 } 00:09:35.560 ] 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.820 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.820 "name": "Existed_Raid", 00:09:35.820 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:35.820 "strip_size_kb": 64, 00:09:35.820 "state": "configuring", 00:09:35.820 "raid_level": "concat", 00:09:35.820 "superblock": true, 00:09:35.820 "num_base_bdevs": 4, 00:09:35.820 "num_base_bdevs_discovered": 2, 00:09:35.820 
"num_base_bdevs_operational": 4, 00:09:35.820 "base_bdevs_list": [ 00:09:35.820 { 00:09:35.820 "name": "BaseBdev1", 00:09:35.820 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:35.820 "is_configured": true, 00:09:35.820 "data_offset": 2048, 00:09:35.820 "data_size": 63488 00:09:35.820 }, 00:09:35.820 { 00:09:35.820 "name": "BaseBdev2", 00:09:35.820 "uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 00:09:35.820 "is_configured": true, 00:09:35.820 "data_offset": 2048, 00:09:35.820 "data_size": 63488 00:09:35.820 }, 00:09:35.820 { 00:09:35.820 "name": "BaseBdev3", 00:09:35.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.820 "is_configured": false, 00:09:35.820 "data_offset": 0, 00:09:35.820 "data_size": 0 00:09:35.820 }, 00:09:35.820 { 00:09:35.820 "name": "BaseBdev4", 00:09:35.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.820 "is_configured": false, 00:09:35.820 "data_offset": 0, 00:09:35.820 "data_size": 0 00:09:35.820 } 00:09:35.820 ] 00:09:35.820 }' 00:09:35.820 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.820 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.079 [2024-12-06 11:52:33.673998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.079 BaseBdev3 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.079 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.079 [ 00:09:36.079 { 00:09:36.079 "name": "BaseBdev3", 00:09:36.079 "aliases": [ 00:09:36.079 "688faf41-1342-406e-97ed-81a42cff1d5e" 00:09:36.079 ], 00:09:36.079 "product_name": "Malloc disk", 00:09:36.079 "block_size": 512, 00:09:36.079 "num_blocks": 65536, 00:09:36.079 "uuid": "688faf41-1342-406e-97ed-81a42cff1d5e", 00:09:36.079 "assigned_rate_limits": { 00:09:36.079 "rw_ios_per_sec": 0, 00:09:36.079 "rw_mbytes_per_sec": 0, 00:09:36.079 "r_mbytes_per_sec": 0, 00:09:36.079 "w_mbytes_per_sec": 0 00:09:36.079 }, 00:09:36.079 "claimed": true, 00:09:36.079 "claim_type": "exclusive_write", 00:09:36.079 "zoned": false, 00:09:36.079 "supported_io_types": { 
00:09:36.079 "read": true, 00:09:36.079 "write": true, 00:09:36.079 "unmap": true, 00:09:36.079 "flush": true, 00:09:36.079 "reset": true, 00:09:36.079 "nvme_admin": false, 00:09:36.079 "nvme_io": false, 00:09:36.079 "nvme_io_md": false, 00:09:36.079 "write_zeroes": true, 00:09:36.079 "zcopy": true, 00:09:36.079 "get_zone_info": false, 00:09:36.080 "zone_management": false, 00:09:36.080 "zone_append": false, 00:09:36.080 "compare": false, 00:09:36.080 "compare_and_write": false, 00:09:36.080 "abort": true, 00:09:36.080 "seek_hole": false, 00:09:36.080 "seek_data": false, 00:09:36.080 "copy": true, 00:09:36.080 "nvme_iov_md": false 00:09:36.080 }, 00:09:36.080 "memory_domains": [ 00:09:36.080 { 00:09:36.080 "dma_device_id": "system", 00:09:36.080 "dma_device_type": 1 00:09:36.080 }, 00:09:36.080 { 00:09:36.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.080 "dma_device_type": 2 00:09:36.080 } 00:09:36.080 ], 00:09:36.080 "driver_specific": {} 00:09:36.080 } 00:09:36.080 ] 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.080 "name": "Existed_Raid", 00:09:36.080 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:36.080 "strip_size_kb": 64, 00:09:36.080 "state": "configuring", 00:09:36.080 "raid_level": "concat", 00:09:36.080 "superblock": true, 00:09:36.080 "num_base_bdevs": 4, 00:09:36.080 "num_base_bdevs_discovered": 3, 00:09:36.080 "num_base_bdevs_operational": 4, 00:09:36.080 "base_bdevs_list": [ 00:09:36.080 { 00:09:36.080 "name": "BaseBdev1", 00:09:36.080 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:36.080 "is_configured": true, 00:09:36.080 "data_offset": 2048, 00:09:36.080 "data_size": 63488 00:09:36.080 }, 00:09:36.080 { 00:09:36.080 "name": "BaseBdev2", 00:09:36.080 
"uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 00:09:36.080 "is_configured": true, 00:09:36.080 "data_offset": 2048, 00:09:36.080 "data_size": 63488 00:09:36.080 }, 00:09:36.080 { 00:09:36.080 "name": "BaseBdev3", 00:09:36.080 "uuid": "688faf41-1342-406e-97ed-81a42cff1d5e", 00:09:36.080 "is_configured": true, 00:09:36.080 "data_offset": 2048, 00:09:36.080 "data_size": 63488 00:09:36.080 }, 00:09:36.080 { 00:09:36.080 "name": "BaseBdev4", 00:09:36.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.080 "is_configured": false, 00:09:36.080 "data_offset": 0, 00:09:36.080 "data_size": 0 00:09:36.080 } 00:09:36.080 ] 00:09:36.080 }' 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.080 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 [2024-12-06 11:52:34.116181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:36.650 [2024-12-06 11:52:34.116454] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:36.650 [2024-12-06 11:52:34.116505] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:36.650 [2024-12-06 11:52:34.116803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:36.650 BaseBdev4 00:09:36.650 [2024-12-06 11:52:34.116961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:36.650 [2024-12-06 11:52:34.116984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:36.650 [2024-12-06 11:52:34.117094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 [ 00:09:36.650 { 00:09:36.650 "name": "BaseBdev4", 00:09:36.650 "aliases": [ 00:09:36.650 "cb4dd577-22ca-4888-856c-e110f54174ff" 00:09:36.650 ], 00:09:36.650 "product_name": "Malloc disk", 00:09:36.650 "block_size": 512, 00:09:36.650 
"num_blocks": 65536, 00:09:36.650 "uuid": "cb4dd577-22ca-4888-856c-e110f54174ff", 00:09:36.650 "assigned_rate_limits": { 00:09:36.650 "rw_ios_per_sec": 0, 00:09:36.650 "rw_mbytes_per_sec": 0, 00:09:36.650 "r_mbytes_per_sec": 0, 00:09:36.650 "w_mbytes_per_sec": 0 00:09:36.650 }, 00:09:36.650 "claimed": true, 00:09:36.650 "claim_type": "exclusive_write", 00:09:36.650 "zoned": false, 00:09:36.650 "supported_io_types": { 00:09:36.650 "read": true, 00:09:36.650 "write": true, 00:09:36.650 "unmap": true, 00:09:36.650 "flush": true, 00:09:36.650 "reset": true, 00:09:36.650 "nvme_admin": false, 00:09:36.650 "nvme_io": false, 00:09:36.650 "nvme_io_md": false, 00:09:36.650 "write_zeroes": true, 00:09:36.650 "zcopy": true, 00:09:36.650 "get_zone_info": false, 00:09:36.650 "zone_management": false, 00:09:36.650 "zone_append": false, 00:09:36.650 "compare": false, 00:09:36.650 "compare_and_write": false, 00:09:36.650 "abort": true, 00:09:36.650 "seek_hole": false, 00:09:36.650 "seek_data": false, 00:09:36.650 "copy": true, 00:09:36.650 "nvme_iov_md": false 00:09:36.650 }, 00:09:36.650 "memory_domains": [ 00:09:36.650 { 00:09:36.650 "dma_device_id": "system", 00:09:36.650 "dma_device_type": 1 00:09:36.650 }, 00:09:36.650 { 00:09:36.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.650 "dma_device_type": 2 00:09:36.650 } 00:09:36.650 ], 00:09:36.650 "driver_specific": {} 00:09:36.650 } 00:09:36.650 ] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.650 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.650 "name": "Existed_Raid", 00:09:36.650 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:36.650 "strip_size_kb": 64, 00:09:36.650 "state": "online", 00:09:36.650 "raid_level": "concat", 00:09:36.650 "superblock": true, 00:09:36.650 "num_base_bdevs": 4, 
00:09:36.650 "num_base_bdevs_discovered": 4, 00:09:36.650 "num_base_bdevs_operational": 4, 00:09:36.650 "base_bdevs_list": [ 00:09:36.650 { 00:09:36.650 "name": "BaseBdev1", 00:09:36.650 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:36.650 "is_configured": true, 00:09:36.650 "data_offset": 2048, 00:09:36.650 "data_size": 63488 00:09:36.651 }, 00:09:36.651 { 00:09:36.651 "name": "BaseBdev2", 00:09:36.651 "uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 00:09:36.651 "is_configured": true, 00:09:36.651 "data_offset": 2048, 00:09:36.651 "data_size": 63488 00:09:36.651 }, 00:09:36.651 { 00:09:36.651 "name": "BaseBdev3", 00:09:36.651 "uuid": "688faf41-1342-406e-97ed-81a42cff1d5e", 00:09:36.651 "is_configured": true, 00:09:36.651 "data_offset": 2048, 00:09:36.651 "data_size": 63488 00:09:36.651 }, 00:09:36.651 { 00:09:36.651 "name": "BaseBdev4", 00:09:36.651 "uuid": "cb4dd577-22ca-4888-856c-e110f54174ff", 00:09:36.651 "is_configured": true, 00:09:36.651 "data_offset": 2048, 00:09:36.651 "data_size": 63488 00:09:36.651 } 00:09:36.651 ] 00:09:36.651 }' 00:09:36.651 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.651 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.922 
11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.922 [2024-12-06 11:52:34.563717] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.922 "name": "Existed_Raid", 00:09:36.922 "aliases": [ 00:09:36.922 "24c07857-6f24-49cb-9f32-4520ac593d55" 00:09:36.922 ], 00:09:36.922 "product_name": "Raid Volume", 00:09:36.922 "block_size": 512, 00:09:36.922 "num_blocks": 253952, 00:09:36.922 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:36.922 "assigned_rate_limits": { 00:09:36.922 "rw_ios_per_sec": 0, 00:09:36.922 "rw_mbytes_per_sec": 0, 00:09:36.922 "r_mbytes_per_sec": 0, 00:09:36.922 "w_mbytes_per_sec": 0 00:09:36.922 }, 00:09:36.922 "claimed": false, 00:09:36.922 "zoned": false, 00:09:36.922 "supported_io_types": { 00:09:36.922 "read": true, 00:09:36.922 "write": true, 00:09:36.922 "unmap": true, 00:09:36.922 "flush": true, 00:09:36.922 "reset": true, 00:09:36.922 "nvme_admin": false, 00:09:36.922 "nvme_io": false, 00:09:36.922 "nvme_io_md": false, 00:09:36.922 "write_zeroes": true, 00:09:36.922 "zcopy": false, 00:09:36.922 "get_zone_info": false, 00:09:36.922 "zone_management": false, 00:09:36.922 "zone_append": false, 00:09:36.922 "compare": false, 00:09:36.922 "compare_and_write": false, 00:09:36.922 "abort": false, 00:09:36.922 "seek_hole": false, 00:09:36.922 "seek_data": false, 00:09:36.922 "copy": false, 00:09:36.922 
"nvme_iov_md": false 00:09:36.922 }, 00:09:36.922 "memory_domains": [ 00:09:36.922 { 00:09:36.922 "dma_device_id": "system", 00:09:36.922 "dma_device_type": 1 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.922 "dma_device_type": 2 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "system", 00:09:36.922 "dma_device_type": 1 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.922 "dma_device_type": 2 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "system", 00:09:36.922 "dma_device_type": 1 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.922 "dma_device_type": 2 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "system", 00:09:36.922 "dma_device_type": 1 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.922 "dma_device_type": 2 00:09:36.922 } 00:09:36.922 ], 00:09:36.922 "driver_specific": { 00:09:36.922 "raid": { 00:09:36.922 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:36.922 "strip_size_kb": 64, 00:09:36.922 "state": "online", 00:09:36.922 "raid_level": "concat", 00:09:36.922 "superblock": true, 00:09:36.922 "num_base_bdevs": 4, 00:09:36.922 "num_base_bdevs_discovered": 4, 00:09:36.922 "num_base_bdevs_operational": 4, 00:09:36.922 "base_bdevs_list": [ 00:09:36.922 { 00:09:36.922 "name": "BaseBdev1", 00:09:36.922 "uuid": "27bf582c-9883-4818-8c43-85a7a493f9f4", 00:09:36.922 "is_configured": true, 00:09:36.922 "data_offset": 2048, 00:09:36.922 "data_size": 63488 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "name": "BaseBdev2", 00:09:36.922 "uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 00:09:36.922 "is_configured": true, 00:09:36.922 "data_offset": 2048, 00:09:36.922 "data_size": 63488 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "name": "BaseBdev3", 00:09:36.922 "uuid": "688faf41-1342-406e-97ed-81a42cff1d5e", 00:09:36.922 "is_configured": true, 
00:09:36.922 "data_offset": 2048, 00:09:36.922 "data_size": 63488 00:09:36.922 }, 00:09:36.922 { 00:09:36.922 "name": "BaseBdev4", 00:09:36.922 "uuid": "cb4dd577-22ca-4888-856c-e110f54174ff", 00:09:36.922 "is_configured": true, 00:09:36.922 "data_offset": 2048, 00:09:36.922 "data_size": 63488 00:09:36.922 } 00:09:36.922 ] 00:09:36.922 } 00:09:36.922 } 00:09:36.922 }' 00:09:36.922 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:36.923 BaseBdev2 00:09:36.923 BaseBdev3 00:09:36.923 BaseBdev4' 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.923 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.182 11:52:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 [2024-12-06 11:52:34.835015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.182 [2024-12-06 11:52:34.835082] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.182 [2024-12-06 11:52:34.835146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.182 "name": "Existed_Raid", 00:09:37.182 "uuid": "24c07857-6f24-49cb-9f32-4520ac593d55", 00:09:37.182 "strip_size_kb": 64, 00:09:37.182 "state": "offline", 00:09:37.182 "raid_level": "concat", 00:09:37.182 "superblock": true, 00:09:37.182 "num_base_bdevs": 4, 00:09:37.182 "num_base_bdevs_discovered": 3, 00:09:37.182 "num_base_bdevs_operational": 3, 00:09:37.182 "base_bdevs_list": [ 00:09:37.182 { 00:09:37.182 "name": null, 00:09:37.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.182 "is_configured": false, 00:09:37.182 "data_offset": 0, 00:09:37.182 "data_size": 63488 00:09:37.182 }, 00:09:37.182 { 00:09:37.182 "name": "BaseBdev2", 00:09:37.182 "uuid": "a0a0d208-8f12-46e5-88b2-cedde395cf76", 00:09:37.182 "is_configured": true, 00:09:37.182 "data_offset": 2048, 00:09:37.182 "data_size": 63488 00:09:37.182 }, 00:09:37.182 { 00:09:37.182 "name": "BaseBdev3", 00:09:37.182 "uuid": "688faf41-1342-406e-97ed-81a42cff1d5e", 00:09:37.182 "is_configured": true, 00:09:37.182 "data_offset": 2048, 00:09:37.182 "data_size": 63488 00:09:37.182 }, 00:09:37.182 { 00:09:37.182 "name": "BaseBdev4", 00:09:37.182 "uuid": "cb4dd577-22ca-4888-856c-e110f54174ff", 00:09:37.182 "is_configured": true, 00:09:37.182 "data_offset": 2048, 00:09:37.182 "data_size": 63488 00:09:37.182 } 00:09:37.182 ] 00:09:37.182 }' 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.182 11:52:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.748 11:52:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.748 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 [2024-12-06 11:52:35.345365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 [2024-12-06 11:52:35.396411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:37.749 11:52:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 [2024-12-06 11:52:35.467353] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:37.749 [2024-12-06 11:52:35.467407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.749 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 BaseBdev2 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 [ 00:09:38.008 { 00:09:38.008 "name": "BaseBdev2", 00:09:38.008 "aliases": [ 00:09:38.008 
"dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb" 00:09:38.008 ], 00:09:38.008 "product_name": "Malloc disk", 00:09:38.008 "block_size": 512, 00:09:38.008 "num_blocks": 65536, 00:09:38.008 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:38.008 "assigned_rate_limits": { 00:09:38.008 "rw_ios_per_sec": 0, 00:09:38.008 "rw_mbytes_per_sec": 0, 00:09:38.008 "r_mbytes_per_sec": 0, 00:09:38.008 "w_mbytes_per_sec": 0 00:09:38.008 }, 00:09:38.008 "claimed": false, 00:09:38.008 "zoned": false, 00:09:38.008 "supported_io_types": { 00:09:38.008 "read": true, 00:09:38.008 "write": true, 00:09:38.008 "unmap": true, 00:09:38.008 "flush": true, 00:09:38.008 "reset": true, 00:09:38.008 "nvme_admin": false, 00:09:38.008 "nvme_io": false, 00:09:38.008 "nvme_io_md": false, 00:09:38.008 "write_zeroes": true, 00:09:38.008 "zcopy": true, 00:09:38.008 "get_zone_info": false, 00:09:38.008 "zone_management": false, 00:09:38.008 "zone_append": false, 00:09:38.008 "compare": false, 00:09:38.008 "compare_and_write": false, 00:09:38.008 "abort": true, 00:09:38.008 "seek_hole": false, 00:09:38.008 "seek_data": false, 00:09:38.008 "copy": true, 00:09:38.008 "nvme_iov_md": false 00:09:38.008 }, 00:09:38.008 "memory_domains": [ 00:09:38.008 { 00:09:38.008 "dma_device_id": "system", 00:09:38.008 "dma_device_type": 1 00:09:38.008 }, 00:09:38.008 { 00:09:38.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.008 "dma_device_type": 2 00:09:38.008 } 00:09:38.008 ], 00:09:38.008 "driver_specific": {} 00:09:38.008 } 00:09:38.008 ] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.008 11:52:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 BaseBdev3 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.008 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.008 [ 00:09:38.008 { 
00:09:38.008 "name": "BaseBdev3", 00:09:38.008 "aliases": [ 00:09:38.008 "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057" 00:09:38.008 ], 00:09:38.008 "product_name": "Malloc disk", 00:09:38.008 "block_size": 512, 00:09:38.008 "num_blocks": 65536, 00:09:38.008 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:38.008 "assigned_rate_limits": { 00:09:38.008 "rw_ios_per_sec": 0, 00:09:38.008 "rw_mbytes_per_sec": 0, 00:09:38.008 "r_mbytes_per_sec": 0, 00:09:38.008 "w_mbytes_per_sec": 0 00:09:38.008 }, 00:09:38.008 "claimed": false, 00:09:38.008 "zoned": false, 00:09:38.008 "supported_io_types": { 00:09:38.008 "read": true, 00:09:38.008 "write": true, 00:09:38.008 "unmap": true, 00:09:38.008 "flush": true, 00:09:38.008 "reset": true, 00:09:38.008 "nvme_admin": false, 00:09:38.008 "nvme_io": false, 00:09:38.008 "nvme_io_md": false, 00:09:38.008 "write_zeroes": true, 00:09:38.008 "zcopy": true, 00:09:38.008 "get_zone_info": false, 00:09:38.008 "zone_management": false, 00:09:38.008 "zone_append": false, 00:09:38.008 "compare": false, 00:09:38.009 "compare_and_write": false, 00:09:38.009 "abort": true, 00:09:38.009 "seek_hole": false, 00:09:38.009 "seek_data": false, 00:09:38.009 "copy": true, 00:09:38.009 "nvme_iov_md": false 00:09:38.009 }, 00:09:38.009 "memory_domains": [ 00:09:38.009 { 00:09:38.009 "dma_device_id": "system", 00:09:38.009 "dma_device_type": 1 00:09:38.009 }, 00:09:38.009 { 00:09:38.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.009 "dma_device_type": 2 00:09:38.009 } 00:09:38.009 ], 00:09:38.009 "driver_specific": {} 00:09:38.009 } 00:09:38.009 ] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 BaseBdev4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:38.009 [ 00:09:38.009 { 00:09:38.009 "name": "BaseBdev4", 00:09:38.009 "aliases": [ 00:09:38.009 "758d5d00-f54d-4a87-b99f-a083b158e29e" 00:09:38.009 ], 00:09:38.009 "product_name": "Malloc disk", 00:09:38.009 "block_size": 512, 00:09:38.009 "num_blocks": 65536, 00:09:38.009 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:38.009 "assigned_rate_limits": { 00:09:38.009 "rw_ios_per_sec": 0, 00:09:38.009 "rw_mbytes_per_sec": 0, 00:09:38.009 "r_mbytes_per_sec": 0, 00:09:38.009 "w_mbytes_per_sec": 0 00:09:38.009 }, 00:09:38.009 "claimed": false, 00:09:38.009 "zoned": false, 00:09:38.009 "supported_io_types": { 00:09:38.009 "read": true, 00:09:38.009 "write": true, 00:09:38.009 "unmap": true, 00:09:38.009 "flush": true, 00:09:38.009 "reset": true, 00:09:38.009 "nvme_admin": false, 00:09:38.009 "nvme_io": false, 00:09:38.009 "nvme_io_md": false, 00:09:38.009 "write_zeroes": true, 00:09:38.009 "zcopy": true, 00:09:38.009 "get_zone_info": false, 00:09:38.009 "zone_management": false, 00:09:38.009 "zone_append": false, 00:09:38.009 "compare": false, 00:09:38.009 "compare_and_write": false, 00:09:38.009 "abort": true, 00:09:38.009 "seek_hole": false, 00:09:38.009 "seek_data": false, 00:09:38.009 "copy": true, 00:09:38.009 "nvme_iov_md": false 00:09:38.009 }, 00:09:38.009 "memory_domains": [ 00:09:38.009 { 00:09:38.009 "dma_device_id": "system", 00:09:38.009 "dma_device_type": 1 00:09:38.009 }, 00:09:38.009 { 00:09:38.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.009 "dma_device_type": 2 00:09:38.009 } 00:09:38.009 ], 00:09:38.009 "driver_specific": {} 00:09:38.009 } 00:09:38.009 ] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.009 11:52:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 [2024-12-06 11:52:35.698236] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.009 [2024-12-06 11:52:35.698330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.009 [2024-12-06 11:52:35.698377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.009 [2024-12-06 11:52:35.700139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.009 [2024-12-06 11:52:35.700225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.009 "name": "Existed_Raid", 00:09:38.009 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:38.009 "strip_size_kb": 64, 00:09:38.009 "state": "configuring", 00:09:38.009 "raid_level": "concat", 00:09:38.009 "superblock": true, 00:09:38.009 "num_base_bdevs": 4, 00:09:38.009 "num_base_bdevs_discovered": 3, 00:09:38.009 "num_base_bdevs_operational": 4, 00:09:38.009 "base_bdevs_list": [ 00:09:38.009 { 00:09:38.009 "name": "BaseBdev1", 00:09:38.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.009 "is_configured": false, 00:09:38.009 "data_offset": 0, 00:09:38.009 "data_size": 0 00:09:38.009 }, 00:09:38.009 { 00:09:38.009 "name": "BaseBdev2", 00:09:38.009 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:38.009 "is_configured": true, 00:09:38.009 "data_offset": 2048, 00:09:38.009 "data_size": 63488 
00:09:38.009 }, 00:09:38.009 { 00:09:38.009 "name": "BaseBdev3", 00:09:38.009 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:38.009 "is_configured": true, 00:09:38.009 "data_offset": 2048, 00:09:38.009 "data_size": 63488 00:09:38.009 }, 00:09:38.009 { 00:09:38.009 "name": "BaseBdev4", 00:09:38.009 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:38.009 "is_configured": true, 00:09:38.009 "data_offset": 2048, 00:09:38.009 "data_size": 63488 00:09:38.009 } 00:09:38.009 ] 00:09:38.009 }' 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.009 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.575 [2024-12-06 11:52:36.089545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.575 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.576 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.576 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.576 "name": "Existed_Raid", 00:09:38.576 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:38.576 "strip_size_kb": 64, 00:09:38.576 "state": "configuring", 00:09:38.576 "raid_level": "concat", 00:09:38.576 "superblock": true, 00:09:38.576 "num_base_bdevs": 4, 00:09:38.576 "num_base_bdevs_discovered": 2, 00:09:38.576 "num_base_bdevs_operational": 4, 00:09:38.576 "base_bdevs_list": [ 00:09:38.576 { 00:09:38.576 "name": "BaseBdev1", 00:09:38.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.576 "is_configured": false, 00:09:38.576 "data_offset": 0, 00:09:38.576 "data_size": 0 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": null, 00:09:38.576 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:38.576 "is_configured": false, 00:09:38.576 "data_offset": 0, 00:09:38.576 "data_size": 63488 
00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": "BaseBdev3", 00:09:38.576 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:38.576 "is_configured": true, 00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": "BaseBdev4", 00:09:38.576 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:38.576 "is_configured": true, 00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 } 00:09:38.576 ] 00:09:38.576 }' 00:09:38.576 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.576 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.835 [2024-12-06 11:52:36.571654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.835 BaseBdev1 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.835 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 [ 00:09:39.096 { 00:09:39.096 "name": "BaseBdev1", 00:09:39.096 "aliases": [ 00:09:39.096 "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8" 00:09:39.096 ], 00:09:39.096 "product_name": "Malloc disk", 00:09:39.096 "block_size": 512, 00:09:39.096 "num_blocks": 65536, 00:09:39.096 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:39.096 "assigned_rate_limits": { 00:09:39.096 "rw_ios_per_sec": 0, 00:09:39.096 "rw_mbytes_per_sec": 0, 
00:09:39.096 "r_mbytes_per_sec": 0, 00:09:39.096 "w_mbytes_per_sec": 0 00:09:39.096 }, 00:09:39.096 "claimed": true, 00:09:39.096 "claim_type": "exclusive_write", 00:09:39.096 "zoned": false, 00:09:39.096 "supported_io_types": { 00:09:39.096 "read": true, 00:09:39.096 "write": true, 00:09:39.096 "unmap": true, 00:09:39.096 "flush": true, 00:09:39.096 "reset": true, 00:09:39.096 "nvme_admin": false, 00:09:39.096 "nvme_io": false, 00:09:39.096 "nvme_io_md": false, 00:09:39.096 "write_zeroes": true, 00:09:39.096 "zcopy": true, 00:09:39.096 "get_zone_info": false, 00:09:39.096 "zone_management": false, 00:09:39.096 "zone_append": false, 00:09:39.096 "compare": false, 00:09:39.096 "compare_and_write": false, 00:09:39.096 "abort": true, 00:09:39.096 "seek_hole": false, 00:09:39.096 "seek_data": false, 00:09:39.096 "copy": true, 00:09:39.096 "nvme_iov_md": false 00:09:39.096 }, 00:09:39.096 "memory_domains": [ 00:09:39.096 { 00:09:39.096 "dma_device_id": "system", 00:09:39.096 "dma_device_type": 1 00:09:39.096 }, 00:09:39.096 { 00:09:39.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.096 "dma_device_type": 2 00:09:39.096 } 00:09:39.096 ], 00:09:39.096 "driver_specific": {} 00:09:39.096 } 00:09:39.096 ] 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.096 11:52:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.096 "name": "Existed_Raid", 00:09:39.096 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:39.096 "strip_size_kb": 64, 00:09:39.096 "state": "configuring", 00:09:39.096 "raid_level": "concat", 00:09:39.096 "superblock": true, 00:09:39.096 "num_base_bdevs": 4, 00:09:39.096 "num_base_bdevs_discovered": 3, 00:09:39.096 "num_base_bdevs_operational": 4, 00:09:39.096 "base_bdevs_list": [ 00:09:39.096 { 00:09:39.096 "name": "BaseBdev1", 00:09:39.096 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:39.096 "is_configured": true, 00:09:39.096 "data_offset": 2048, 00:09:39.096 "data_size": 63488 00:09:39.096 }, 00:09:39.096 { 
00:09:39.096 "name": null, 00:09:39.096 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:39.096 "is_configured": false, 00:09:39.096 "data_offset": 0, 00:09:39.096 "data_size": 63488 00:09:39.096 }, 00:09:39.096 { 00:09:39.096 "name": "BaseBdev3", 00:09:39.096 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:39.096 "is_configured": true, 00:09:39.096 "data_offset": 2048, 00:09:39.096 "data_size": 63488 00:09:39.096 }, 00:09:39.096 { 00:09:39.096 "name": "BaseBdev4", 00:09:39.096 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:39.096 "is_configured": true, 00:09:39.096 "data_offset": 2048, 00:09:39.096 "data_size": 63488 00:09:39.096 } 00:09:39.096 ] 00:09:39.096 }' 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.096 11:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.365 [2024-12-06 11:52:37.086814] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.365 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.625 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.625 11:52:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.625 "name": "Existed_Raid", 00:09:39.625 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:39.625 "strip_size_kb": 64, 00:09:39.625 "state": "configuring", 00:09:39.625 "raid_level": "concat", 00:09:39.625 "superblock": true, 00:09:39.625 "num_base_bdevs": 4, 00:09:39.625 "num_base_bdevs_discovered": 2, 00:09:39.625 "num_base_bdevs_operational": 4, 00:09:39.625 "base_bdevs_list": [ 00:09:39.625 { 00:09:39.625 "name": "BaseBdev1", 00:09:39.625 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:39.625 "is_configured": true, 00:09:39.625 "data_offset": 2048, 00:09:39.625 "data_size": 63488 00:09:39.625 }, 00:09:39.625 { 00:09:39.625 "name": null, 00:09:39.625 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:39.625 "is_configured": false, 00:09:39.625 "data_offset": 0, 00:09:39.625 "data_size": 63488 00:09:39.625 }, 00:09:39.625 { 00:09:39.625 "name": null, 00:09:39.625 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:39.625 "is_configured": false, 00:09:39.625 "data_offset": 0, 00:09:39.625 "data_size": 63488 00:09:39.625 }, 00:09:39.625 { 00:09:39.625 "name": "BaseBdev4", 00:09:39.625 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:39.625 "is_configured": true, 00:09:39.625 "data_offset": 2048, 00:09:39.625 "data_size": 63488 00:09:39.625 } 00:09:39.625 ] 00:09:39.625 }' 00:09:39.625 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.625 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.929 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.929 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.930 11:52:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.930 [2024-12-06 11:52:37.562059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.930 "name": "Existed_Raid", 00:09:39.930 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:39.930 "strip_size_kb": 64, 00:09:39.930 "state": "configuring", 00:09:39.930 "raid_level": "concat", 00:09:39.930 "superblock": true, 00:09:39.930 "num_base_bdevs": 4, 00:09:39.930 "num_base_bdevs_discovered": 3, 00:09:39.930 "num_base_bdevs_operational": 4, 00:09:39.930 "base_bdevs_list": [ 00:09:39.930 { 00:09:39.930 "name": "BaseBdev1", 00:09:39.930 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:39.930 "is_configured": true, 00:09:39.930 "data_offset": 2048, 00:09:39.930 "data_size": 63488 00:09:39.930 }, 00:09:39.930 { 00:09:39.930 "name": null, 00:09:39.930 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:39.930 "is_configured": false, 00:09:39.930 "data_offset": 0, 00:09:39.930 "data_size": 63488 00:09:39.930 }, 00:09:39.930 { 00:09:39.930 "name": "BaseBdev3", 00:09:39.930 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:39.930 "is_configured": true, 00:09:39.930 "data_offset": 2048, 00:09:39.930 "data_size": 63488 00:09:39.930 }, 00:09:39.930 { 00:09:39.930 "name": "BaseBdev4", 00:09:39.930 "uuid": 
"758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:39.930 "is_configured": true, 00:09:39.930 "data_offset": 2048, 00:09:39.930 "data_size": 63488 00:09:39.930 } 00:09:39.930 ] 00:09:39.930 }' 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.930 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.501 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.501 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.501 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.501 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.501 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.501 [2024-12-06 11:52:38.025253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.501 "name": "Existed_Raid", 00:09:40.501 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:40.501 "strip_size_kb": 64, 00:09:40.501 "state": "configuring", 00:09:40.501 "raid_level": "concat", 00:09:40.501 "superblock": true, 00:09:40.501 "num_base_bdevs": 4, 00:09:40.501 "num_base_bdevs_discovered": 2, 00:09:40.501 "num_base_bdevs_operational": 4, 00:09:40.501 "base_bdevs_list": [ 00:09:40.501 { 00:09:40.501 "name": null, 00:09:40.501 
"uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:40.501 "is_configured": false, 00:09:40.501 "data_offset": 0, 00:09:40.501 "data_size": 63488 00:09:40.501 }, 00:09:40.501 { 00:09:40.501 "name": null, 00:09:40.501 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:40.501 "is_configured": false, 00:09:40.501 "data_offset": 0, 00:09:40.501 "data_size": 63488 00:09:40.501 }, 00:09:40.501 { 00:09:40.501 "name": "BaseBdev3", 00:09:40.501 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:40.501 "is_configured": true, 00:09:40.501 "data_offset": 2048, 00:09:40.501 "data_size": 63488 00:09:40.501 }, 00:09:40.501 { 00:09:40.501 "name": "BaseBdev4", 00:09:40.501 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:40.501 "is_configured": true, 00:09:40.501 "data_offset": 2048, 00:09:40.501 "data_size": 63488 00:09:40.501 } 00:09:40.501 ] 00:09:40.501 }' 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.501 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.762 [2024-12-06 11:52:38.438863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.762 11:52:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.762 "name": "Existed_Raid", 00:09:40.762 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:40.762 "strip_size_kb": 64, 00:09:40.762 "state": "configuring", 00:09:40.762 "raid_level": "concat", 00:09:40.762 "superblock": true, 00:09:40.762 "num_base_bdevs": 4, 00:09:40.762 "num_base_bdevs_discovered": 3, 00:09:40.762 "num_base_bdevs_operational": 4, 00:09:40.762 "base_bdevs_list": [ 00:09:40.762 { 00:09:40.762 "name": null, 00:09:40.762 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:40.762 "is_configured": false, 00:09:40.762 "data_offset": 0, 00:09:40.762 "data_size": 63488 00:09:40.762 }, 00:09:40.762 { 00:09:40.762 "name": "BaseBdev2", 00:09:40.762 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:40.762 "is_configured": true, 00:09:40.762 "data_offset": 2048, 00:09:40.762 "data_size": 63488 00:09:40.762 }, 00:09:40.762 { 00:09:40.762 "name": "BaseBdev3", 00:09:40.762 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:40.762 "is_configured": true, 00:09:40.762 "data_offset": 2048, 00:09:40.762 "data_size": 63488 00:09:40.762 }, 00:09:40.762 { 00:09:40.762 "name": "BaseBdev4", 00:09:40.762 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:40.762 "is_configured": true, 00:09:40.762 "data_offset": 2048, 00:09:40.762 "data_size": 63488 00:09:40.762 } 00:09:40.762 ] 00:09:40.762 }' 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.762 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.334 11:52:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f96e5f7f-75a1-49aa-ba6d-ec61eab046e8 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 [2024-12-06 11:52:38.900735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.334 [2024-12-06 11:52:38.900904] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:41.334 [2024-12-06 11:52:38.900917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:41.334 NewBaseBdev 00:09:41.334 [2024-12-06 11:52:38.901161] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:41.334 [2024-12-06 11:52:38.901279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:41.334 [2024-12-06 11:52:38.901298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:41.334 [2024-12-06 11:52:38.901389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.334 11:52:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.334 [ 00:09:41.334 { 00:09:41.334 "name": "NewBaseBdev", 00:09:41.334 "aliases": [ 00:09:41.334 "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8" 00:09:41.334 ], 00:09:41.334 "product_name": "Malloc disk", 00:09:41.334 "block_size": 512, 00:09:41.334 "num_blocks": 65536, 00:09:41.334 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:41.334 "assigned_rate_limits": { 00:09:41.334 "rw_ios_per_sec": 0, 00:09:41.334 "rw_mbytes_per_sec": 0, 00:09:41.334 "r_mbytes_per_sec": 0, 00:09:41.334 "w_mbytes_per_sec": 0 00:09:41.334 }, 00:09:41.334 "claimed": true, 00:09:41.334 "claim_type": "exclusive_write", 00:09:41.334 "zoned": false, 00:09:41.334 "supported_io_types": { 00:09:41.334 "read": true, 00:09:41.334 "write": true, 00:09:41.334 "unmap": true, 00:09:41.334 "flush": true, 00:09:41.334 "reset": true, 00:09:41.334 "nvme_admin": false, 00:09:41.334 "nvme_io": false, 00:09:41.334 "nvme_io_md": false, 00:09:41.334 "write_zeroes": true, 00:09:41.334 "zcopy": true, 00:09:41.334 "get_zone_info": false, 00:09:41.334 "zone_management": false, 00:09:41.334 "zone_append": false, 00:09:41.334 "compare": false, 00:09:41.334 "compare_and_write": false, 00:09:41.334 "abort": true, 00:09:41.334 "seek_hole": false, 00:09:41.334 "seek_data": false, 00:09:41.334 "copy": true, 00:09:41.334 "nvme_iov_md": false 00:09:41.334 }, 00:09:41.334 "memory_domains": [ 00:09:41.334 { 00:09:41.334 "dma_device_id": "system", 00:09:41.334 "dma_device_type": 1 00:09:41.334 }, 00:09:41.334 { 00:09:41.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.334 "dma_device_type": 2 00:09:41.334 } 00:09:41.334 ], 00:09:41.334 "driver_specific": {} 00:09:41.334 } 00:09:41.334 ] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.334 11:52:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.334 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.335 "name": "Existed_Raid", 00:09:41.335 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:41.335 "strip_size_kb": 64, 00:09:41.335 
"state": "online", 00:09:41.335 "raid_level": "concat", 00:09:41.335 "superblock": true, 00:09:41.335 "num_base_bdevs": 4, 00:09:41.335 "num_base_bdevs_discovered": 4, 00:09:41.335 "num_base_bdevs_operational": 4, 00:09:41.335 "base_bdevs_list": [ 00:09:41.335 { 00:09:41.335 "name": "NewBaseBdev", 00:09:41.335 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:41.335 "is_configured": true, 00:09:41.335 "data_offset": 2048, 00:09:41.335 "data_size": 63488 00:09:41.335 }, 00:09:41.335 { 00:09:41.335 "name": "BaseBdev2", 00:09:41.335 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:41.335 "is_configured": true, 00:09:41.335 "data_offset": 2048, 00:09:41.335 "data_size": 63488 00:09:41.335 }, 00:09:41.335 { 00:09:41.335 "name": "BaseBdev3", 00:09:41.335 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:41.335 "is_configured": true, 00:09:41.335 "data_offset": 2048, 00:09:41.335 "data_size": 63488 00:09:41.335 }, 00:09:41.335 { 00:09:41.335 "name": "BaseBdev4", 00:09:41.335 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:41.335 "is_configured": true, 00:09:41.335 "data_offset": 2048, 00:09:41.335 "data_size": 63488 00:09:41.335 } 00:09:41.335 ] 00:09:41.335 }' 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.335 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.595 
11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.595 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.595 [2024-12-06 11:52:39.332331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.856 "name": "Existed_Raid", 00:09:41.856 "aliases": [ 00:09:41.856 "18f04ff8-ea7b-4a39-9754-94192586f6c7" 00:09:41.856 ], 00:09:41.856 "product_name": "Raid Volume", 00:09:41.856 "block_size": 512, 00:09:41.856 "num_blocks": 253952, 00:09:41.856 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:41.856 "assigned_rate_limits": { 00:09:41.856 "rw_ios_per_sec": 0, 00:09:41.856 "rw_mbytes_per_sec": 0, 00:09:41.856 "r_mbytes_per_sec": 0, 00:09:41.856 "w_mbytes_per_sec": 0 00:09:41.856 }, 00:09:41.856 "claimed": false, 00:09:41.856 "zoned": false, 00:09:41.856 "supported_io_types": { 00:09:41.856 "read": true, 00:09:41.856 "write": true, 00:09:41.856 "unmap": true, 00:09:41.856 "flush": true, 00:09:41.856 "reset": true, 00:09:41.856 "nvme_admin": false, 00:09:41.856 "nvme_io": false, 00:09:41.856 "nvme_io_md": false, 00:09:41.856 "write_zeroes": true, 00:09:41.856 "zcopy": false, 00:09:41.856 "get_zone_info": false, 00:09:41.856 "zone_management": false, 00:09:41.856 "zone_append": false, 00:09:41.856 "compare": false, 00:09:41.856 "compare_and_write": false, 00:09:41.856 "abort": 
false, 00:09:41.856 "seek_hole": false, 00:09:41.856 "seek_data": false, 00:09:41.856 "copy": false, 00:09:41.856 "nvme_iov_md": false 00:09:41.856 }, 00:09:41.856 "memory_domains": [ 00:09:41.856 { 00:09:41.856 "dma_device_id": "system", 00:09:41.856 "dma_device_type": 1 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.856 "dma_device_type": 2 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "system", 00:09:41.856 "dma_device_type": 1 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.856 "dma_device_type": 2 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "system", 00:09:41.856 "dma_device_type": 1 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.856 "dma_device_type": 2 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "system", 00:09:41.856 "dma_device_type": 1 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.856 "dma_device_type": 2 00:09:41.856 } 00:09:41.856 ], 00:09:41.856 "driver_specific": { 00:09:41.856 "raid": { 00:09:41.856 "uuid": "18f04ff8-ea7b-4a39-9754-94192586f6c7", 00:09:41.856 "strip_size_kb": 64, 00:09:41.856 "state": "online", 00:09:41.856 "raid_level": "concat", 00:09:41.856 "superblock": true, 00:09:41.856 "num_base_bdevs": 4, 00:09:41.856 "num_base_bdevs_discovered": 4, 00:09:41.856 "num_base_bdevs_operational": 4, 00:09:41.856 "base_bdevs_list": [ 00:09:41.856 { 00:09:41.856 "name": "NewBaseBdev", 00:09:41.856 "uuid": "f96e5f7f-75a1-49aa-ba6d-ec61eab046e8", 00:09:41.856 "is_configured": true, 00:09:41.856 "data_offset": 2048, 00:09:41.856 "data_size": 63488 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "name": "BaseBdev2", 00:09:41.856 "uuid": "dd1e0e80-5dc1-487e-be6b-be2be5ae6bfb", 00:09:41.856 "is_configured": true, 00:09:41.856 "data_offset": 2048, 00:09:41.856 "data_size": 63488 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 
"name": "BaseBdev3", 00:09:41.856 "uuid": "cb9b9c7d-5cde-4e23-a707-a8fafbbbb057", 00:09:41.856 "is_configured": true, 00:09:41.856 "data_offset": 2048, 00:09:41.856 "data_size": 63488 00:09:41.856 }, 00:09:41.856 { 00:09:41.856 "name": "BaseBdev4", 00:09:41.856 "uuid": "758d5d00-f54d-4a87-b99f-a083b158e29e", 00:09:41.856 "is_configured": true, 00:09:41.856 "data_offset": 2048, 00:09:41.856 "data_size": 63488 00:09:41.856 } 00:09:41.856 ] 00:09:41.856 } 00:09:41.856 } 00:09:41.856 }' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.856 BaseBdev2 00:09:41.856 BaseBdev3 00:09:41.856 BaseBdev4' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.856 11:52:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.856 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.857 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.857 [2024-12-06 11:52:39.607558] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.857 [2024-12-06 11:52:39.607587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.857 [2024-12-06 11:52:39.607671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.857 [2024-12-06 11:52:39.607733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.857 [2024-12-06 11:52:39.607748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82477 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82477 ']' 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82477 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82477 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.116 killing process with pid 82477 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82477' 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82477 00:09:42.116 [2024-12-06 11:52:39.657681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.116 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82477 00:09:42.116 [2024-12-06 11:52:39.697908] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.377 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.377 00:09:42.377 real 0m9.081s 00:09:42.377 user 0m15.476s 00:09:42.377 sys 0m1.931s 00:09:42.377 11:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.377 11:52:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 ************************************ 00:09:42.377 END TEST raid_state_function_test_sb 00:09:42.377 ************************************ 00:09:42.377 11:52:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:42.377 11:52:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:42.377 11:52:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.377 11:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 ************************************ 00:09:42.377 START TEST raid_superblock_test 00:09:42.377 ************************************ 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83125 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83125 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83125 ']' 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.377 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 [2024-12-06 11:52:40.089811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:42.377 [2024-12-06 11:52:40.089951] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83125 ] 00:09:42.637 [2024-12-06 11:52:40.233287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.637 [2024-12-06 11:52:40.277370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.637 [2024-12-06 11:52:40.319535] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.637 [2024-12-06 11:52:40.319576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.206 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:43.207 
11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.207 malloc1 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.207 [2024-12-06 11:52:40.929041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.207 [2024-12-06 11:52:40.929116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.207 [2024-12-06 11:52:40.929132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:43.207 [2024-12-06 11:52:40.929152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.207 [2024-12-06 11:52:40.931149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.207 [2024-12-06 11:52:40.931187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.207 pt1 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.207 malloc2 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.207 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 [2024-12-06 11:52:40.965260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.467 [2024-12-06 11:52:40.965320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.467 [2024-12-06 11:52:40.965337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:43.467 [2024-12-06 11:52:40.965350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.467 [2024-12-06 11:52:40.967401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.467 [2024-12-06 11:52:40.967440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.467 
pt2 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 malloc3 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.467 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 [2024-12-06 11:52:40.993512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.467 [2024-12-06 11:52:40.993579] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.467 [2024-12-06 11:52:40.993593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:43.467 [2024-12-06 11:52:40.993603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.467 [2024-12-06 11:52:40.995600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.467 [2024-12-06 11:52:40.995639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.467 pt3 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.468 11:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 malloc4 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 [2024-12-06 11:52:41.021846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:43.468 [2024-12-06 11:52:41.021911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.468 [2024-12-06 11:52:41.021926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:43.468 [2024-12-06 11:52:41.021938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.468 [2024-12-06 11:52:41.023939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.468 [2024-12-06 11:52:41.023979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:43.468 pt4 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 [2024-12-06 11:52:41.033874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.468 [2024-12-06 
11:52:41.035678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.468 [2024-12-06 11:52:41.035745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.468 [2024-12-06 11:52:41.035785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:43.468 [2024-12-06 11:52:41.035923] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:43.468 [2024-12-06 11:52:41.035936] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:43.468 [2024-12-06 11:52:41.036185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:43.468 [2024-12-06 11:52:41.036338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:43.468 [2024-12-06 11:52:41.036352] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:43.468 [2024-12-06 11:52:41.036468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.468 "name": "raid_bdev1", 00:09:43.468 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:43.468 "strip_size_kb": 64, 00:09:43.468 "state": "online", 00:09:43.468 "raid_level": "concat", 00:09:43.468 "superblock": true, 00:09:43.468 "num_base_bdevs": 4, 00:09:43.468 "num_base_bdevs_discovered": 4, 00:09:43.468 "num_base_bdevs_operational": 4, 00:09:43.468 "base_bdevs_list": [ 00:09:43.468 { 00:09:43.468 "name": "pt1", 00:09:43.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.468 "is_configured": true, 00:09:43.468 "data_offset": 2048, 00:09:43.468 "data_size": 63488 00:09:43.468 }, 00:09:43.468 { 00:09:43.468 "name": "pt2", 00:09:43.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.468 "is_configured": true, 00:09:43.468 "data_offset": 2048, 00:09:43.468 "data_size": 63488 00:09:43.468 }, 00:09:43.468 { 00:09:43.468 "name": "pt3", 00:09:43.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.468 "is_configured": true, 00:09:43.468 "data_offset": 2048, 00:09:43.468 
"data_size": 63488 00:09:43.468 }, 00:09:43.468 { 00:09:43.468 "name": "pt4", 00:09:43.468 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:43.468 "is_configured": true, 00:09:43.468 "data_offset": 2048, 00:09:43.468 "data_size": 63488 00:09:43.468 } 00:09:43.468 ] 00:09:43.468 }' 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.468 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.040 [2024-12-06 11:52:41.505332] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.040 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.040 "name": "raid_bdev1", 00:09:44.040 "aliases": [ 00:09:44.040 "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e" 
00:09:44.040 ], 00:09:44.040 "product_name": "Raid Volume", 00:09:44.040 "block_size": 512, 00:09:44.040 "num_blocks": 253952, 00:09:44.040 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:44.040 "assigned_rate_limits": { 00:09:44.040 "rw_ios_per_sec": 0, 00:09:44.040 "rw_mbytes_per_sec": 0, 00:09:44.040 "r_mbytes_per_sec": 0, 00:09:44.040 "w_mbytes_per_sec": 0 00:09:44.040 }, 00:09:44.040 "claimed": false, 00:09:44.040 "zoned": false, 00:09:44.040 "supported_io_types": { 00:09:44.040 "read": true, 00:09:44.040 "write": true, 00:09:44.040 "unmap": true, 00:09:44.040 "flush": true, 00:09:44.040 "reset": true, 00:09:44.040 "nvme_admin": false, 00:09:44.040 "nvme_io": false, 00:09:44.040 "nvme_io_md": false, 00:09:44.040 "write_zeroes": true, 00:09:44.040 "zcopy": false, 00:09:44.040 "get_zone_info": false, 00:09:44.040 "zone_management": false, 00:09:44.040 "zone_append": false, 00:09:44.040 "compare": false, 00:09:44.040 "compare_and_write": false, 00:09:44.040 "abort": false, 00:09:44.040 "seek_hole": false, 00:09:44.040 "seek_data": false, 00:09:44.040 "copy": false, 00:09:44.040 "nvme_iov_md": false 00:09:44.040 }, 00:09:44.040 "memory_domains": [ 00:09:44.040 { 00:09:44.040 "dma_device_id": "system", 00:09:44.040 "dma_device_type": 1 00:09:44.040 }, 00:09:44.040 { 00:09:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.040 "dma_device_type": 2 00:09:44.040 }, 00:09:44.040 { 00:09:44.040 "dma_device_id": "system", 00:09:44.040 "dma_device_type": 1 00:09:44.040 }, 00:09:44.040 { 00:09:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.040 "dma_device_type": 2 00:09:44.040 }, 00:09:44.041 { 00:09:44.041 "dma_device_id": "system", 00:09:44.041 "dma_device_type": 1 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.041 "dma_device_type": 2 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "dma_device_id": "system", 00:09:44.041 "dma_device_type": 1 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:44.041 "dma_device_type": 2 00:09:44.041 } 00:09:44.041 ], 00:09:44.041 "driver_specific": { 00:09:44.041 "raid": { 00:09:44.041 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:44.041 "strip_size_kb": 64, 00:09:44.041 "state": "online", 00:09:44.041 "raid_level": "concat", 00:09:44.041 "superblock": true, 00:09:44.041 "num_base_bdevs": 4, 00:09:44.041 "num_base_bdevs_discovered": 4, 00:09:44.041 "num_base_bdevs_operational": 4, 00:09:44.041 "base_bdevs_list": [ 00:09:44.041 { 00:09:44.041 "name": "pt1", 00:09:44.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.041 "is_configured": true, 00:09:44.041 "data_offset": 2048, 00:09:44.041 "data_size": 63488 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "name": "pt2", 00:09:44.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.041 "is_configured": true, 00:09:44.041 "data_offset": 2048, 00:09:44.041 "data_size": 63488 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "name": "pt3", 00:09:44.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.041 "is_configured": true, 00:09:44.041 "data_offset": 2048, 00:09:44.041 "data_size": 63488 00:09:44.041 }, 00:09:44.041 { 00:09:44.041 "name": "pt4", 00:09:44.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:44.041 "is_configured": true, 00:09:44.041 "data_offset": 2048, 00:09:44.041 "data_size": 63488 00:09:44.041 } 00:09:44.041 ] 00:09:44.041 } 00:09:44.041 } 00:09:44.041 }' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.041 pt2 00:09:44.041 pt3 00:09:44.041 pt4' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.041 11:52:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:44.041 [2024-12-06 11:52:41.760811] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.041 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dd0d955e-3d5c-45fc-883f-ca460c8f2b0e 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dd0d955e-3d5c-45fc-883f-ca460c8f2b0e ']' 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 [2024-12-06 11:52:41.812476] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.302 [2024-12-06 11:52:41.812510] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.302 [2024-12-06 11:52:41.812593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.302 [2024-12-06 11:52:41.812658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.302 [2024-12-06 11:52:41.812679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.302 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.302 [2024-12-06 11:52:41.968244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:44.302 [2024-12-06 11:52:41.970023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:44.302 [2024-12-06 11:52:41.970072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:44.302 [2024-12-06 11:52:41.970107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:44.302 [2024-12-06 11:52:41.970154] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:44.302 [2024-12-06 11:52:41.970203] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:44.302 [2024-12-06 11:52:41.970223] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:44.303 [2024-12-06 11:52:41.970253] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:44.303 [2024-12-06 11:52:41.970267] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.303 [2024-12-06 11:52:41.970276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:44.303 request: 00:09:44.303 { 00:09:44.303 "name": "raid_bdev1", 00:09:44.303 "raid_level": "concat", 00:09:44.303 "base_bdevs": [ 00:09:44.303 "malloc1", 00:09:44.303 "malloc2", 00:09:44.303 "malloc3", 00:09:44.303 "malloc4" 00:09:44.303 ], 00:09:44.303 "strip_size_kb": 64, 00:09:44.303 "superblock": false, 00:09:44.303 "method": "bdev_raid_create", 00:09:44.303 "req_id": 1 00:09:44.303 } 00:09:44.303 Got JSON-RPC error response 00:09:44.303 response: 00:09:44.303 { 00:09:44.303 "code": -17, 00:09:44.303 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:44.303 } 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.303 11:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.303 [2024-12-06 11:52:42.036080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.303 [2024-12-06 11:52:42.036126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.303 [2024-12-06 11:52:42.036145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:44.303 [2024-12-06 11:52:42.036153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.303 [2024-12-06 11:52:42.038150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.303 [2024-12-06 11:52:42.038185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.303 [2024-12-06 11:52:42.038255] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:44.303 [2024-12-06 11:52:42.038295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.303 pt1 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.303 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.563 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.563 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.563 "name": "raid_bdev1", 00:09:44.563 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:44.563 "strip_size_kb": 64, 00:09:44.563 "state": "configuring", 00:09:44.563 "raid_level": "concat", 00:09:44.563 "superblock": true, 00:09:44.563 "num_base_bdevs": 4, 00:09:44.563 "num_base_bdevs_discovered": 1, 00:09:44.563 "num_base_bdevs_operational": 4, 00:09:44.563 "base_bdevs_list": [ 00:09:44.563 { 00:09:44.563 "name": "pt1", 00:09:44.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.563 "is_configured": true, 00:09:44.563 "data_offset": 2048, 00:09:44.563 "data_size": 63488 00:09:44.563 }, 00:09:44.563 { 00:09:44.563 "name": null, 00:09:44.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.563 "is_configured": false, 00:09:44.563 "data_offset": 2048, 00:09:44.563 "data_size": 63488 00:09:44.563 }, 00:09:44.563 { 00:09:44.563 "name": null, 00:09:44.563 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.563 "is_configured": false, 00:09:44.563 "data_offset": 2048, 00:09:44.563 "data_size": 63488 00:09:44.563 }, 00:09:44.563 { 00:09:44.563 "name": null, 00:09:44.563 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:44.563 "is_configured": false, 00:09:44.563 "data_offset": 2048, 00:09:44.563 "data_size": 63488 00:09:44.563 } 00:09:44.563 ] 00:09:44.563 }' 00:09:44.563 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.563 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.824 [2024-12-06 11:52:42.487322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.824 [2024-12-06 11:52:42.487366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.824 [2024-12-06 11:52:42.487383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:44.824 [2024-12-06 11:52:42.487391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.824 [2024-12-06 11:52:42.487717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.824 [2024-12-06 11:52:42.487740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.824 [2024-12-06 11:52:42.487801] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.824 [2024-12-06 11:52:42.487827] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.824 pt2 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.824 [2024-12-06 11:52:42.499336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.824 11:52:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.824 "name": "raid_bdev1", 00:09:44.824 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:44.824 "strip_size_kb": 64, 00:09:44.824 "state": "configuring", 00:09:44.824 "raid_level": "concat", 00:09:44.824 "superblock": true, 00:09:44.824 "num_base_bdevs": 4, 00:09:44.824 "num_base_bdevs_discovered": 1, 00:09:44.824 "num_base_bdevs_operational": 4, 00:09:44.824 "base_bdevs_list": [ 00:09:44.824 { 00:09:44.824 "name": "pt1", 00:09:44.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.824 "is_configured": true, 00:09:44.824 "data_offset": 2048, 00:09:44.824 "data_size": 63488 00:09:44.824 }, 00:09:44.824 { 00:09:44.824 "name": null, 00:09:44.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.824 "is_configured": false, 00:09:44.824 "data_offset": 0, 00:09:44.824 "data_size": 63488 00:09:44.824 }, 00:09:44.824 { 00:09:44.824 "name": null, 00:09:44.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.824 "is_configured": false, 00:09:44.824 "data_offset": 2048, 00:09:44.824 "data_size": 63488 00:09:44.824 }, 00:09:44.824 { 00:09:44.824 "name": null, 00:09:44.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:44.824 "is_configured": false, 00:09:44.824 "data_offset": 2048, 00:09:44.824 "data_size": 63488 00:09:44.824 } 00:09:44.824 ] 00:09:44.824 }' 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.824 11:52:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 [2024-12-06 11:52:42.922563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.394 [2024-12-06 11:52:42.922619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.394 [2024-12-06 11:52:42.922633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:45.394 [2024-12-06 11:52:42.922643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.394 [2024-12-06 11:52:42.922955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.394 [2024-12-06 11:52:42.922980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.394 [2024-12-06 11:52:42.923033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.394 [2024-12-06 11:52:42.923052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.394 pt2 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 [2024-12-06 11:52:42.934513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.394 [2024-12-06 11:52:42.934577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.394 [2024-12-06 11:52:42.934592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:45.394 [2024-12-06 11:52:42.934609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.394 [2024-12-06 11:52:42.934892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.394 [2024-12-06 11:52:42.934917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.394 [2024-12-06 11:52:42.934969] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:45.394 [2024-12-06 11:52:42.934988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.394 pt3 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 [2024-12-06 11:52:42.946514] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:45.394 [2024-12-06 11:52:42.946559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.394 [2024-12-06 11:52:42.946571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:45.394 [2024-12-06 11:52:42.946580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.394 [2024-12-06 11:52:42.946852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.394 [2024-12-06 11:52:42.946879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:45.394 [2024-12-06 11:52:42.946924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:45.394 [2024-12-06 11:52:42.946942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:45.394 [2024-12-06 11:52:42.947038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:45.394 [2024-12-06 11:52:42.947052] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:45.394 [2024-12-06 11:52:42.947275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:45.394 [2024-12-06 11:52:42.947407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:45.394 [2024-12-06 11:52:42.947416] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:45.394 [2024-12-06 11:52:42.947509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.394 pt4 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 11:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.394 "name": "raid_bdev1", 00:09:45.394 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:45.394 "strip_size_kb": 64, 00:09:45.394 "state": "online", 00:09:45.394 "raid_level": "concat", 00:09:45.394 
"superblock": true, 00:09:45.394 "num_base_bdevs": 4, 00:09:45.394 "num_base_bdevs_discovered": 4, 00:09:45.394 "num_base_bdevs_operational": 4, 00:09:45.394 "base_bdevs_list": [ 00:09:45.394 { 00:09:45.394 "name": "pt1", 00:09:45.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.394 "is_configured": true, 00:09:45.394 "data_offset": 2048, 00:09:45.394 "data_size": 63488 00:09:45.394 }, 00:09:45.394 { 00:09:45.394 "name": "pt2", 00:09:45.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.394 "is_configured": true, 00:09:45.394 "data_offset": 2048, 00:09:45.394 "data_size": 63488 00:09:45.394 }, 00:09:45.394 { 00:09:45.394 "name": "pt3", 00:09:45.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.394 "is_configured": true, 00:09:45.394 "data_offset": 2048, 00:09:45.394 "data_size": 63488 00:09:45.394 }, 00:09:45.394 { 00:09:45.394 "name": "pt4", 00:09:45.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.394 "is_configured": true, 00:09:45.394 "data_offset": 2048, 00:09:45.394 "data_size": 63488 00:09:45.394 } 00:09:45.394 ] 00:09:45.394 }' 00:09:45.394 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.394 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.655 11:52:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.655 [2024-12-06 11:52:43.386059] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.655 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.915 "name": "raid_bdev1", 00:09:45.915 "aliases": [ 00:09:45.915 "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e" 00:09:45.915 ], 00:09:45.915 "product_name": "Raid Volume", 00:09:45.915 "block_size": 512, 00:09:45.915 "num_blocks": 253952, 00:09:45.915 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:45.915 "assigned_rate_limits": { 00:09:45.915 "rw_ios_per_sec": 0, 00:09:45.915 "rw_mbytes_per_sec": 0, 00:09:45.915 "r_mbytes_per_sec": 0, 00:09:45.915 "w_mbytes_per_sec": 0 00:09:45.915 }, 00:09:45.915 "claimed": false, 00:09:45.915 "zoned": false, 00:09:45.915 "supported_io_types": { 00:09:45.915 "read": true, 00:09:45.915 "write": true, 00:09:45.915 "unmap": true, 00:09:45.915 "flush": true, 00:09:45.915 "reset": true, 00:09:45.915 "nvme_admin": false, 00:09:45.915 "nvme_io": false, 00:09:45.915 "nvme_io_md": false, 00:09:45.915 "write_zeroes": true, 00:09:45.915 "zcopy": false, 00:09:45.915 "get_zone_info": false, 00:09:45.915 "zone_management": false, 00:09:45.915 "zone_append": false, 00:09:45.915 "compare": false, 00:09:45.915 "compare_and_write": false, 00:09:45.915 "abort": false, 00:09:45.915 "seek_hole": false, 00:09:45.915 "seek_data": false, 00:09:45.915 "copy": false, 00:09:45.915 "nvme_iov_md": false 00:09:45.915 }, 00:09:45.915 
"memory_domains": [ 00:09:45.915 { 00:09:45.915 "dma_device_id": "system", 00:09:45.915 "dma_device_type": 1 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.915 "dma_device_type": 2 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "system", 00:09:45.915 "dma_device_type": 1 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.915 "dma_device_type": 2 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "system", 00:09:45.915 "dma_device_type": 1 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.915 "dma_device_type": 2 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "system", 00:09:45.915 "dma_device_type": 1 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.915 "dma_device_type": 2 00:09:45.915 } 00:09:45.915 ], 00:09:45.915 "driver_specific": { 00:09:45.915 "raid": { 00:09:45.915 "uuid": "dd0d955e-3d5c-45fc-883f-ca460c8f2b0e", 00:09:45.915 "strip_size_kb": 64, 00:09:45.915 "state": "online", 00:09:45.915 "raid_level": "concat", 00:09:45.915 "superblock": true, 00:09:45.915 "num_base_bdevs": 4, 00:09:45.915 "num_base_bdevs_discovered": 4, 00:09:45.915 "num_base_bdevs_operational": 4, 00:09:45.915 "base_bdevs_list": [ 00:09:45.915 { 00:09:45.915 "name": "pt1", 00:09:45.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.915 "is_configured": true, 00:09:45.915 "data_offset": 2048, 00:09:45.915 "data_size": 63488 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "name": "pt2", 00:09:45.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.915 "is_configured": true, 00:09:45.915 "data_offset": 2048, 00:09:45.915 "data_size": 63488 00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "name": "pt3", 00:09:45.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.915 "is_configured": true, 00:09:45.915 "data_offset": 2048, 00:09:45.915 "data_size": 63488 
00:09:45.915 }, 00:09:45.915 { 00:09:45.915 "name": "pt4", 00:09:45.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.915 "is_configured": true, 00:09:45.915 "data_offset": 2048, 00:09:45.915 "data_size": 63488 00:09:45.915 } 00:09:45.915 ] 00:09:45.915 } 00:09:45.915 } 00:09:45.915 }' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.915 pt2 00:09:45.915 pt3 00:09:45.915 pt4' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:45.915 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.916 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.176 [2024-12-06 11:52:43.733482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dd0d955e-3d5c-45fc-883f-ca460c8f2b0e '!=' dd0d955e-3d5c-45fc-883f-ca460c8f2b0e ']' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83125 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83125 ']' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83125 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83125 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.176 killing process with pid 83125 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83125' 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83125 00:09:46.176 [2024-12-06 11:52:43.813037] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.176 [2024-12-06 11:52:43.813115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.176 [2024-12-06 11:52:43.813179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.176 [2024-12-06 11:52:43.813190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:46.176 11:52:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83125 00:09:46.176 [2024-12-06 11:52:43.856983] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.435 11:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:46.435 00:09:46.435 real 0m4.087s 00:09:46.435 user 0m6.470s 00:09:46.435 sys 0m0.863s 00:09:46.435 11:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.435 11:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.435 ************************************ 00:09:46.435 END TEST raid_superblock_test 
00:09:46.435 ************************************ 00:09:46.435 11:52:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:46.435 11:52:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:46.435 11:52:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.435 11:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.435 ************************************ 00:09:46.435 START TEST raid_read_error_test 00:09:46.435 ************************************ 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:46.435 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aHJW8k2bx5 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83373 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83373 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83373 ']' 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.694 11:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.694 [2024-12-06 11:52:44.272269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:46.694 [2024-12-06 11:52:44.272395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83373 ] 00:09:46.694 [2024-12-06 11:52:44.396584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.694 [2024-12-06 11:52:44.440040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.954 [2024-12-06 11:52:44.481476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.954 [2024-12-06 11:52:44.481512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.541 BaseBdev1_malloc 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.541 true 00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 [2024-12-06 11:52:45.134526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:47.541 [2024-12-06 11:52:45.134582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:47.541 [2024-12-06 11:52:45.134603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:09:47.541 [2024-12-06 11:52:45.134618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:47.541 [2024-12-06 11:52:45.136659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:47.541 [2024-12-06 11:52:45.136765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:47.541 BaseBdev1
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 BaseBdev2_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 true
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 [2024-12-06 11:52:45.182984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:47.541 [2024-12-06 11:52:45.183096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:47.541 [2024-12-06 11:52:45.183122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:47.541 [2024-12-06 11:52:45.183130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:47.541 [2024-12-06 11:52:45.185103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:47.541 [2024-12-06 11:52:45.185139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:47.541 BaseBdev2
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 BaseBdev3_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 true
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 [2024-12-06 11:52:45.220162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:47.541 [2024-12-06 11:52:45.220210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:47.541 [2024-12-06 11:52:45.220228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:09:47.541 [2024-12-06 11:52:45.220252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:47.541 [2024-12-06 11:52:45.222222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:47.541 [2024-12-06 11:52:45.222265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:47.541 BaseBdev3
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 BaseBdev4_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 true
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 [2024-12-06 11:52:45.260568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:09:47.541 [2024-12-06 11:52:45.260651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:47.541 [2024-12-06 11:52:45.260676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:47.541 [2024-12-06 11:52:45.260685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:47.541 [2024-12-06 11:52:45.262687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:47.541 [2024-12-06 11:52:45.262724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:09:47.541 BaseBdev4
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.541 [2024-12-06 11:52:45.272626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:47.541 [2024-12-06 11:52:45.274390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:47.541 [2024-12-06 11:52:45.274464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:47.541 [2024-12-06 11:52:45.274526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:47.541 [2024-12-06 11:52:45.274714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000
00:09:47.541 [2024-12-06 11:52:45.274725] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:47.541 [2024-12-06 11:52:45.274957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:47.541 [2024-12-06 11:52:45.275089] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000
00:09:47.541 [2024-12-06 11:52:45.275101] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000
00:09:47.541 [2024-12-06 11:52:45.275219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.541 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.804 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.804 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:47.804 "name": "raid_bdev1",
00:09:47.804 "uuid": "88fb38df-e5e1-467e-ad8f-2d8b471a5375",
00:09:47.804 "strip_size_kb": 64,
00:09:47.804 "state": "online",
00:09:47.804 "raid_level": "concat",
00:09:47.804 "superblock": true,
00:09:47.804 "num_base_bdevs": 4,
00:09:47.804 "num_base_bdevs_discovered": 4,
00:09:47.804 "num_base_bdevs_operational": 4,
00:09:47.804 "base_bdevs_list": [
00:09:47.804 {
00:09:47.804 "name": "BaseBdev1",
00:09:47.804 "uuid": "077e7e67-25c1-55b9-90c2-06e13726e815",
00:09:47.804 "is_configured": true,
00:09:47.804 "data_offset": 2048,
00:09:47.804 "data_size": 63488
00:09:47.804 },
00:09:47.804 {
00:09:47.804 "name": "BaseBdev2",
00:09:47.804 "uuid": "6c94ed78-c6dd-5c7b-b44f-fd57c01e6974",
00:09:47.804 "is_configured": true,
00:09:47.804 "data_offset": 2048,
00:09:47.804 "data_size": 63488
00:09:47.804 },
00:09:47.804 {
00:09:47.804 "name": "BaseBdev3",
00:09:47.804 "uuid": "a6cdfdd2-c8ea-586e-8bab-236f1c18da5b",
00:09:47.804 "is_configured": true,
00:09:47.804 "data_offset": 2048,
00:09:47.804 "data_size": 63488
00:09:47.804 },
00:09:47.804 {
00:09:47.804 "name": "BaseBdev4",
00:09:47.804 "uuid": "8dfacf63-298b-51ce-b160-1c926e23cd10",
00:09:47.804 "is_configured": true,
00:09:47.804 "data_offset": 2048,
00:09:47.804 "data_size": 63488
00:09:47.804 }
00:09:47.804 ]
00:09:47.804 }'
00:09:47.804 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:47.804 11:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:48.063 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:48.063 11:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:48.063 [2024-12-06 11:52:45.800027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.001 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.262 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.262 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:49.262 "name": "raid_bdev1",
00:09:49.262 "uuid": "88fb38df-e5e1-467e-ad8f-2d8b471a5375",
00:09:49.262 "strip_size_kb": 64,
00:09:49.262 "state": "online",
00:09:49.262 "raid_level": "concat",
00:09:49.262 "superblock": true,
00:09:49.262 "num_base_bdevs": 4,
00:09:49.262 "num_base_bdevs_discovered": 4,
00:09:49.262 "num_base_bdevs_operational": 4,
00:09:49.262 "base_bdevs_list": [
00:09:49.262 {
00:09:49.262 "name": "BaseBdev1",
00:09:49.262 "uuid": "077e7e67-25c1-55b9-90c2-06e13726e815",
00:09:49.262 "is_configured": true,
00:09:49.262 "data_offset": 2048,
00:09:49.262 "data_size": 63488
00:09:49.262 },
00:09:49.262 {
00:09:49.262 "name": "BaseBdev2",
00:09:49.262 "uuid": "6c94ed78-c6dd-5c7b-b44f-fd57c01e6974",
00:09:49.262 "is_configured": true,
00:09:49.262 "data_offset": 2048,
00:09:49.262 "data_size": 63488
00:09:49.262 },
00:09:49.262 {
00:09:49.262 "name": "BaseBdev3",
00:09:49.262 "uuid": "a6cdfdd2-c8ea-586e-8bab-236f1c18da5b",
00:09:49.262 "is_configured": true,
00:09:49.262 "data_offset": 2048,
00:09:49.262 "data_size": 63488
00:09:49.262 },
00:09:49.262 {
00:09:49.262 "name": "BaseBdev4",
00:09:49.262 "uuid": "8dfacf63-298b-51ce-b160-1c926e23cd10",
00:09:49.262 "is_configured": true,
00:09:49.262 "data_offset": 2048,
00:09:49.262 "data_size": 63488
00:09:49.262 }
00:09:49.262 ]
00:09:49.262 }'
00:09:49.262 11:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:49.262 11:52:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.523 [2024-12-06 11:52:47.191783] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:49.523 [2024-12-06 11:52:47.191889] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:49.523 [2024-12-06 11:52:47.194419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:49.523 [2024-12-06 11:52:47.194505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:49.523 [2024-12-06 11:52:47.194566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:49.523 [2024-12-06 11:52:47.194608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline
00:09:49.523 {
00:09:49.523 "results": [
00:09:49.523 {
00:09:49.523 "job": "raid_bdev1",
00:09:49.523 "core_mask": "0x1",
00:09:49.523 "workload": "randrw",
00:09:49.523 "percentage": 50,
00:09:49.523 "status": "finished",
00:09:49.523 "queue_depth": 1,
00:09:49.523 "io_size": 131072,
00:09:49.523 "runtime": 1.392739,
00:09:49.523 "iops": 17320.546060676123,
00:09:49.523 "mibps": 2165.0682575845153,
00:09:49.523 "io_failed": 1,
00:09:49.523 "io_timeout": 0,
00:09:49.523 "avg_latency_us": 80.09061305525526,
00:09:49.523 "min_latency_us": 24.593886462882097,
00:09:49.523 "max_latency_us": 1373.6803493449781
00:09:49.523 }
00:09:49.523 ],
00:09:49.523 "core_count": 1
00:09:49.523 }
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83373
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83373 ']'
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83373
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83373
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 83373
11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83373'
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83373
00:09:49.523 [2024-12-06 11:52:47.240617] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:49.523 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83373
00:09:49.523 [2024-12-06 11:52:47.275957] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aHJW8k2bx5
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:09:49.783
00:09:49.783 real 0m3.339s
00:09:49.783 user 0m4.206s
00:09:49.783 sys 0m0.539s
00:09:49.783 ************************************
00:09:49.783 END TEST raid_read_error_test
00:09:49.783 ************************************
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:49.783 11:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.043 11:52:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write
00:09:50.043 11:52:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:50.043 11:52:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.043 11:52:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:50.043 ************************************
00:09:50.043 START TEST raid_write_error_test
00:09:50.043 ************************************
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GX0mdliKZx
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83508
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83508
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83508 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:50.043 11:52:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.043 [2024-12-06 11:52:47.711473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:09:50.043 [2024-12-06 11:52:47.712146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83508 ]
00:09:50.302 [2024-12-06 11:52:47.862536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:50.302 [2024-12-06 11:52:47.905617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:50.302 [2024-12-06 11:52:47.946742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.302 [2024-12-06 11:52:47.946853] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.872 BaseBdev1_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.872 true
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.872 [2024-12-06 11:52:48.576059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:50.872 [2024-12-06 11:52:48.576154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:50.872 [2024-12-06 11:52:48.576199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:09:50.872 [2024-12-06 11:52:48.576266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:50.872 [2024-12-06 11:52:48.578282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:50.872 [2024-12-06 11:52:48.578342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:50.872 BaseBdev1
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.872 BaseBdev2_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.872 true
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.872 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 [2024-12-06 11:52:48.631979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:51.131 [2024-12-06 11:52:48.632054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:51.131 [2024-12-06 11:52:48.632086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:51.131 [2024-12-06 11:52:48.632101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:51.131 [2024-12-06 11:52:48.635207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:51.131 [2024-12-06 11:52:48.635269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:51.131 BaseBdev2
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 BaseBdev3_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 true
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 [2024-12-06 11:52:48.672750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:51.131 [2024-12-06 11:52:48.672792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:51.131 [2024-12-06 11:52:48.672810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:09:51.131 [2024-12-06 11:52:48.672818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:51.131 [2024-12-06 11:52:48.674785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:51.131 [2024-12-06 11:52:48.674821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:51.131 BaseBdev3
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 BaseBdev4_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 true
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 [2024-12-06 11:52:48.713050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:09:51.131 [2024-12-06 11:52:48.713092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:51.131 [2024-12-06 11:52:48.713112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:51.131 [2024-12-06 11:52:48.713119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:51.131 [2024-12-06 11:52:48.715134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:51.131 [2024-12-06 11:52:48.715168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:09:51.131 BaseBdev4
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.131 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.131 [2024-12-06 11:52:48.725091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:51.131 [2024-12-06 11:52:48.726859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:51.131 [2024-12-06 11:52:48.726939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:51.131 [2024-12-06 11:52:48.726996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:51.131 [2024-12-06 11:52:48.727180] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000
00:09:51.131 [2024-12-06 11:52:48.727195] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:51.131 [2024-12-06 11:52:48.727477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:51.131 [2024-12-06 11:52:48.727604] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000
00:09:51.131 [2024-12-06 11:52:48.727621] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000
00:09:51.131 [2024-12-06 11:52:48.727742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.132 "name": "raid_bdev1",
00:09:51.132 "uuid": "750bb23a-4090-427a-9a2c-981bf2a64ab9",
00:09:51.132 "strip_size_kb": 64,
00:09:51.132 "state": "online",
00:09:51.132 "raid_level": "concat",
00:09:51.132 "superblock": true,
00:09:51.132 "num_base_bdevs": 4,
00:09:51.132 "num_base_bdevs_discovered": 4,
"num_base_bdevs_operational": 4, 00:09:51.132 "base_bdevs_list": [ 00:09:51.132 { 00:09:51.132 "name": "BaseBdev1", 00:09:51.132 "uuid": "37cfe980-9042-5ac1-85ff-15067618cc53", 00:09:51.132 "is_configured": true, 00:09:51.132 "data_offset": 2048, 00:09:51.132 "data_size": 63488 00:09:51.132 }, 00:09:51.132 { 00:09:51.132 "name": "BaseBdev2", 00:09:51.132 "uuid": "ebfd1af0-d1ce-55c3-8a91-55832d30d65a", 00:09:51.132 "is_configured": true, 00:09:51.132 "data_offset": 2048, 00:09:51.132 "data_size": 63488 00:09:51.132 }, 00:09:51.132 { 00:09:51.132 "name": "BaseBdev3", 00:09:51.132 "uuid": "9f0c2d26-847d-54c1-8b68-1d0633874823", 00:09:51.132 "is_configured": true, 00:09:51.132 "data_offset": 2048, 00:09:51.132 "data_size": 63488 00:09:51.132 }, 00:09:51.132 { 00:09:51.132 "name": "BaseBdev4", 00:09:51.132 "uuid": "c12a89ce-88c0-5316-8559-1ed85b0f616e", 00:09:51.132 "is_configured": true, 00:09:51.132 "data_offset": 2048, 00:09:51.132 "data_size": 63488 00:09:51.132 } 00:09:51.132 ] 00:09:51.132 }' 00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.132 11:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 11:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:51.390 11:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:51.648 [2024-12-06 11:52:49.208598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.587 11:52:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.587 "name": "raid_bdev1", 00:09:52.587 "uuid": "750bb23a-4090-427a-9a2c-981bf2a64ab9", 00:09:52.587 "strip_size_kb": 64, 00:09:52.587 "state": "online", 00:09:52.587 "raid_level": "concat", 00:09:52.587 "superblock": true, 00:09:52.587 "num_base_bdevs": 4, 00:09:52.587 "num_base_bdevs_discovered": 4, 00:09:52.587 "num_base_bdevs_operational": 4, 00:09:52.587 "base_bdevs_list": [ 00:09:52.587 { 00:09:52.587 "name": "BaseBdev1", 00:09:52.587 "uuid": "37cfe980-9042-5ac1-85ff-15067618cc53", 00:09:52.587 "is_configured": true, 00:09:52.587 "data_offset": 2048, 00:09:52.587 "data_size": 63488 00:09:52.587 }, 00:09:52.587 { 00:09:52.587 "name": "BaseBdev2", 00:09:52.587 "uuid": "ebfd1af0-d1ce-55c3-8a91-55832d30d65a", 00:09:52.587 "is_configured": true, 00:09:52.587 "data_offset": 2048, 00:09:52.587 "data_size": 63488 00:09:52.587 }, 00:09:52.587 { 00:09:52.587 "name": "BaseBdev3", 00:09:52.587 "uuid": "9f0c2d26-847d-54c1-8b68-1d0633874823", 00:09:52.587 "is_configured": true, 00:09:52.587 "data_offset": 2048, 00:09:52.587 "data_size": 63488 00:09:52.587 }, 00:09:52.587 { 00:09:52.587 "name": "BaseBdev4", 00:09:52.587 "uuid": "c12a89ce-88c0-5316-8559-1ed85b0f616e", 00:09:52.587 "is_configured": true, 00:09:52.587 "data_offset": 2048, 00:09:52.587 "data_size": 63488 00:09:52.587 } 00:09:52.587 ] 00:09:52.587 }' 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.587 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.847 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.847 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.847 11:52:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.847 [2024-12-06 11:52:50.576137] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.847 [2024-12-06 11:52:50.576219] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.847 [2024-12-06 11:52:50.578760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.847 [2024-12-06 11:52:50.578820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.847 [2024-12-06 11:52:50.578865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.847 [2024-12-06 11:52:50.578874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:52.847 { 00:09:52.847 "results": [ 00:09:52.847 { 00:09:52.847 "job": "raid_bdev1", 00:09:52.847 "core_mask": "0x1", 00:09:52.847 "workload": "randrw", 00:09:52.847 "percentage": 50, 00:09:52.847 "status": "finished", 00:09:52.847 "queue_depth": 1, 00:09:52.847 "io_size": 131072, 00:09:52.847 "runtime": 1.368407, 00:09:52.847 "iops": 17484.56416840896, 00:09:52.847 "mibps": 2185.57052105112, 00:09:52.847 "io_failed": 1, 00:09:52.847 "io_timeout": 0, 00:09:52.847 "avg_latency_us": 79.21468557108658, 00:09:52.847 "min_latency_us": 24.593886462882097, 00:09:52.847 "max_latency_us": 1352.216593886463 00:09:52.847 } 00:09:52.847 ], 00:09:52.847 "core_count": 1 00:09:52.847 } 00:09:52.847 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83508 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83508 ']' 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83508 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.848 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83508 00:09:53.107 killing process with pid 83508 00:09:53.107 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.107 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.107 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83508' 00:09:53.107 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83508 00:09:53.107 [2024-12-06 11:52:50.616835] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.107 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83508 00:09:53.107 [2024-12-06 11:52:50.652568] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GX0mdliKZx 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.368 ************************************ 00:09:53.368 END TEST raid_write_error_test 00:09:53.368 ************************************ 00:09:53.368 11:52:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:53.368 00:09:53.368 real 0m3.303s 00:09:53.368 user 0m4.130s 00:09:53.368 sys 0m0.542s 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.368 11:52:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.368 11:52:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.368 11:52:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:53.368 11:52:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:53.368 11:52:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.368 11:52:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.368 ************************************ 00:09:53.368 START TEST raid_state_function_test 00:09:53.368 ************************************ 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:53.368 11:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.368 Process raid pid: 83635 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83635 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83635' 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83635 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83635 ']' 00:09:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.368 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.369 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.369 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.369 11:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.369 [2024-12-06 11:52:51.058102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:53.369 [2024-12-06 11:52:51.058211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.628 [2024-12-06 11:52:51.202754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.628 [2024-12-06 11:52:51.245965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.628 [2024-12-06 11:52:51.287246] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.628 [2024-12-06 11:52:51.287279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.229 [2024-12-06 11:52:51.880124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.229 [2024-12-06 11:52:51.880175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.229 [2024-12-06 11:52:51.880187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.229 [2024-12-06 11:52:51.880197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.229 [2024-12-06 11:52:51.880203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:54.229 [2024-12-06 11:52:51.880215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.229 [2024-12-06 11:52:51.880220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.229 [2024-12-06 11:52:51.880229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.229 "name": "Existed_Raid", 00:09:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.229 "strip_size_kb": 0, 00:09:54.229 "state": "configuring", 00:09:54.229 "raid_level": "raid1", 00:09:54.229 "superblock": false, 00:09:54.229 "num_base_bdevs": 4, 00:09:54.229 "num_base_bdevs_discovered": 0, 00:09:54.229 "num_base_bdevs_operational": 4, 00:09:54.229 "base_bdevs_list": [ 00:09:54.229 { 00:09:54.229 "name": "BaseBdev1", 00:09:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.229 "is_configured": false, 00:09:54.229 "data_offset": 0, 00:09:54.229 "data_size": 0 00:09:54.229 }, 00:09:54.229 { 00:09:54.229 "name": "BaseBdev2", 00:09:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.229 "is_configured": false, 00:09:54.229 "data_offset": 0, 00:09:54.229 "data_size": 0 00:09:54.229 }, 00:09:54.229 { 00:09:54.229 "name": "BaseBdev3", 00:09:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.229 "is_configured": false, 00:09:54.229 "data_offset": 0, 00:09:54.229 "data_size": 0 00:09:54.229 }, 00:09:54.229 { 00:09:54.229 "name": "BaseBdev4", 00:09:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.229 "is_configured": false, 00:09:54.229 "data_offset": 0, 00:09:54.229 "data_size": 0 00:09:54.229 } 00:09:54.229 ] 00:09:54.229 }' 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.229 11:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 [2024-12-06 11:52:52.347216] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.799 [2024-12-06 11:52:52.347343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 [2024-12-06 11:52:52.359220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.799 [2024-12-06 11:52:52.359328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.799 [2024-12-06 11:52:52.359359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.799 [2024-12-06 11:52:52.359381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.799 [2024-12-06 11:52:52.359398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.799 [2024-12-06 11:52:52.359418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.799 [2024-12-06 11:52:52.359435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.799 [2024-12-06 11:52:52.359454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 [2024-12-06 11:52:52.381033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.799 BaseBdev1 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 [ 00:09:54.799 { 00:09:54.799 "name": "BaseBdev1", 00:09:54.799 "aliases": [ 00:09:54.799 "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e" 00:09:54.799 ], 00:09:54.799 "product_name": "Malloc disk", 00:09:54.799 "block_size": 512, 00:09:54.799 "num_blocks": 65536, 00:09:54.799 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:54.799 "assigned_rate_limits": { 00:09:54.799 "rw_ios_per_sec": 0, 00:09:54.799 "rw_mbytes_per_sec": 0, 00:09:54.799 "r_mbytes_per_sec": 0, 00:09:54.799 "w_mbytes_per_sec": 0 00:09:54.799 }, 00:09:54.799 "claimed": true, 00:09:54.799 "claim_type": "exclusive_write", 00:09:54.799 "zoned": false, 00:09:54.799 "supported_io_types": { 00:09:54.799 "read": true, 00:09:54.799 "write": true, 00:09:54.799 "unmap": true, 00:09:54.799 "flush": true, 00:09:54.799 "reset": true, 00:09:54.799 "nvme_admin": false, 00:09:54.799 "nvme_io": false, 00:09:54.799 "nvme_io_md": false, 00:09:54.799 "write_zeroes": true, 00:09:54.799 "zcopy": true, 00:09:54.799 "get_zone_info": false, 00:09:54.799 "zone_management": false, 00:09:54.799 "zone_append": false, 00:09:54.799 "compare": false, 00:09:54.799 "compare_and_write": false, 00:09:54.799 "abort": true, 00:09:54.799 "seek_hole": false, 00:09:54.799 "seek_data": false, 00:09:54.799 "copy": true, 00:09:54.799 "nvme_iov_md": false 00:09:54.799 }, 00:09:54.799 "memory_domains": [ 00:09:54.799 { 00:09:54.799 "dma_device_id": "system", 00:09:54.799 "dma_device_type": 1 00:09:54.799 }, 00:09:54.799 { 00:09:54.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.799 "dma_device_type": 2 00:09:54.799 } 00:09:54.799 ], 00:09:54.799 "driver_specific": {} 00:09:54.799 } 00:09:54.799 ] 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.800 "name": "Existed_Raid", 00:09:54.800 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:54.800 "strip_size_kb": 0, 00:09:54.800 "state": "configuring", 00:09:54.800 "raid_level": "raid1", 00:09:54.800 "superblock": false, 00:09:54.800 "num_base_bdevs": 4, 00:09:54.800 "num_base_bdevs_discovered": 1, 00:09:54.800 "num_base_bdevs_operational": 4, 00:09:54.800 "base_bdevs_list": [ 00:09:54.800 { 00:09:54.800 "name": "BaseBdev1", 00:09:54.800 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:54.800 "is_configured": true, 00:09:54.800 "data_offset": 0, 00:09:54.800 "data_size": 65536 00:09:54.800 }, 00:09:54.800 { 00:09:54.800 "name": "BaseBdev2", 00:09:54.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.800 "is_configured": false, 00:09:54.800 "data_offset": 0, 00:09:54.800 "data_size": 0 00:09:54.800 }, 00:09:54.800 { 00:09:54.800 "name": "BaseBdev3", 00:09:54.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.800 "is_configured": false, 00:09:54.800 "data_offset": 0, 00:09:54.800 "data_size": 0 00:09:54.800 }, 00:09:54.800 { 00:09:54.800 "name": "BaseBdev4", 00:09:54.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.800 "is_configured": false, 00:09:54.800 "data_offset": 0, 00:09:54.800 "data_size": 0 00:09:54.800 } 00:09:54.800 ] 00:09:54.800 }' 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.800 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.369 [2024-12-06 11:52:52.860262] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.369 [2024-12-06 11:52:52.860312] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.369 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.370 [2024-12-06 11:52:52.868308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.370 [2024-12-06 11:52:52.870088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.370 [2024-12-06 11:52:52.870195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.370 [2024-12-06 11:52:52.870208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.370 [2024-12-06 11:52:52.870217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.370 [2024-12-06 11:52:52.870223] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.370 [2024-12-06 11:52:52.870242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:55.370 11:52:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.370 "name": "Existed_Raid", 00:09:55.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.370 "strip_size_kb": 0, 00:09:55.370 "state": "configuring", 00:09:55.370 "raid_level": "raid1", 00:09:55.370 "superblock": false, 00:09:55.370 "num_base_bdevs": 4, 00:09:55.370 "num_base_bdevs_discovered": 1, 00:09:55.370 
"num_base_bdevs_operational": 4, 00:09:55.370 "base_bdevs_list": [ 00:09:55.370 { 00:09:55.370 "name": "BaseBdev1", 00:09:55.370 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:55.370 "is_configured": true, 00:09:55.370 "data_offset": 0, 00:09:55.370 "data_size": 65536 00:09:55.370 }, 00:09:55.370 { 00:09:55.370 "name": "BaseBdev2", 00:09:55.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.370 "is_configured": false, 00:09:55.370 "data_offset": 0, 00:09:55.370 "data_size": 0 00:09:55.370 }, 00:09:55.370 { 00:09:55.370 "name": "BaseBdev3", 00:09:55.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.370 "is_configured": false, 00:09:55.370 "data_offset": 0, 00:09:55.370 "data_size": 0 00:09:55.370 }, 00:09:55.370 { 00:09:55.370 "name": "BaseBdev4", 00:09:55.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.370 "is_configured": false, 00:09:55.370 "data_offset": 0, 00:09:55.370 "data_size": 0 00:09:55.370 } 00:09:55.370 ] 00:09:55.370 }' 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.370 11:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.629 [2024-12-06 11:52:53.241829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.629 BaseBdev2 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.629 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.630 [ 00:09:55.630 { 00:09:55.630 "name": "BaseBdev2", 00:09:55.630 "aliases": [ 00:09:55.630 "192c64dd-51b1-4635-8c58-c39fabab6f3c" 00:09:55.630 ], 00:09:55.630 "product_name": "Malloc disk", 00:09:55.630 "block_size": 512, 00:09:55.630 "num_blocks": 65536, 00:09:55.630 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:55.630 "assigned_rate_limits": { 00:09:55.630 "rw_ios_per_sec": 0, 00:09:55.630 "rw_mbytes_per_sec": 0, 00:09:55.630 "r_mbytes_per_sec": 0, 00:09:55.630 "w_mbytes_per_sec": 0 00:09:55.630 }, 00:09:55.630 "claimed": true, 00:09:55.630 "claim_type": "exclusive_write", 00:09:55.630 "zoned": false, 00:09:55.630 "supported_io_types": { 00:09:55.630 "read": true, 00:09:55.630 "write": true, 00:09:55.630 
"unmap": true, 00:09:55.630 "flush": true, 00:09:55.630 "reset": true, 00:09:55.630 "nvme_admin": false, 00:09:55.630 "nvme_io": false, 00:09:55.630 "nvme_io_md": false, 00:09:55.630 "write_zeroes": true, 00:09:55.630 "zcopy": true, 00:09:55.630 "get_zone_info": false, 00:09:55.630 "zone_management": false, 00:09:55.630 "zone_append": false, 00:09:55.630 "compare": false, 00:09:55.630 "compare_and_write": false, 00:09:55.630 "abort": true, 00:09:55.630 "seek_hole": false, 00:09:55.630 "seek_data": false, 00:09:55.630 "copy": true, 00:09:55.630 "nvme_iov_md": false 00:09:55.630 }, 00:09:55.630 "memory_domains": [ 00:09:55.630 { 00:09:55.630 "dma_device_id": "system", 00:09:55.630 "dma_device_type": 1 00:09:55.630 }, 00:09:55.630 { 00:09:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.630 "dma_device_type": 2 00:09:55.630 } 00:09:55.630 ], 00:09:55.630 "driver_specific": {} 00:09:55.630 } 00:09:55.630 ] 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.630 11:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.630 "name": "Existed_Raid", 00:09:55.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.630 "strip_size_kb": 0, 00:09:55.630 "state": "configuring", 00:09:55.630 "raid_level": "raid1", 00:09:55.630 "superblock": false, 00:09:55.630 "num_base_bdevs": 4, 00:09:55.630 "num_base_bdevs_discovered": 2, 00:09:55.630 "num_base_bdevs_operational": 4, 00:09:55.630 "base_bdevs_list": [ 00:09:55.630 { 00:09:55.630 "name": "BaseBdev1", 00:09:55.630 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:55.630 "is_configured": true, 00:09:55.630 "data_offset": 0, 00:09:55.630 "data_size": 65536 00:09:55.630 }, 00:09:55.630 { 00:09:55.630 "name": "BaseBdev2", 00:09:55.630 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:55.630 "is_configured": true, 00:09:55.630 
"data_offset": 0, 00:09:55.630 "data_size": 65536 00:09:55.630 }, 00:09:55.630 { 00:09:55.630 "name": "BaseBdev3", 00:09:55.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.630 "is_configured": false, 00:09:55.630 "data_offset": 0, 00:09:55.630 "data_size": 0 00:09:55.630 }, 00:09:55.630 { 00:09:55.630 "name": "BaseBdev4", 00:09:55.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.630 "is_configured": false, 00:09:55.630 "data_offset": 0, 00:09:55.630 "data_size": 0 00:09:55.630 } 00:09:55.630 ] 00:09:55.630 }' 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.630 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.198 [2024-12-06 11:52:53.719892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.198 BaseBdev3 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.198 [ 00:09:56.198 { 00:09:56.198 "name": "BaseBdev3", 00:09:56.198 "aliases": [ 00:09:56.198 "3a11d316-b021-4077-b89e-59aa82ac4cb7" 00:09:56.198 ], 00:09:56.198 "product_name": "Malloc disk", 00:09:56.198 "block_size": 512, 00:09:56.198 "num_blocks": 65536, 00:09:56.198 "uuid": "3a11d316-b021-4077-b89e-59aa82ac4cb7", 00:09:56.198 "assigned_rate_limits": { 00:09:56.198 "rw_ios_per_sec": 0, 00:09:56.198 "rw_mbytes_per_sec": 0, 00:09:56.198 "r_mbytes_per_sec": 0, 00:09:56.198 "w_mbytes_per_sec": 0 00:09:56.198 }, 00:09:56.198 "claimed": true, 00:09:56.198 "claim_type": "exclusive_write", 00:09:56.198 "zoned": false, 00:09:56.198 "supported_io_types": { 00:09:56.198 "read": true, 00:09:56.198 "write": true, 00:09:56.198 "unmap": true, 00:09:56.198 "flush": true, 00:09:56.198 "reset": true, 00:09:56.198 "nvme_admin": false, 00:09:56.198 "nvme_io": false, 00:09:56.198 "nvme_io_md": false, 00:09:56.198 "write_zeroes": true, 00:09:56.198 "zcopy": true, 00:09:56.198 "get_zone_info": false, 00:09:56.198 "zone_management": false, 00:09:56.198 "zone_append": false, 00:09:56.198 "compare": false, 00:09:56.198 "compare_and_write": false, 00:09:56.198 "abort": true, 
00:09:56.198 "seek_hole": false, 00:09:56.198 "seek_data": false, 00:09:56.198 "copy": true, 00:09:56.198 "nvme_iov_md": false 00:09:56.198 }, 00:09:56.198 "memory_domains": [ 00:09:56.198 { 00:09:56.198 "dma_device_id": "system", 00:09:56.198 "dma_device_type": 1 00:09:56.198 }, 00:09:56.198 { 00:09:56.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.198 "dma_device_type": 2 00:09:56.198 } 00:09:56.198 ], 00:09:56.198 "driver_specific": {} 00:09:56.198 } 00:09:56.198 ] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.198 11:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.198 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.198 "name": "Existed_Raid", 00:09:56.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.198 "strip_size_kb": 0, 00:09:56.198 "state": "configuring", 00:09:56.198 "raid_level": "raid1", 00:09:56.198 "superblock": false, 00:09:56.198 "num_base_bdevs": 4, 00:09:56.198 "num_base_bdevs_discovered": 3, 00:09:56.198 "num_base_bdevs_operational": 4, 00:09:56.198 "base_bdevs_list": [ 00:09:56.198 { 00:09:56.198 "name": "BaseBdev1", 00:09:56.198 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:56.198 "is_configured": true, 00:09:56.198 "data_offset": 0, 00:09:56.198 "data_size": 65536 00:09:56.198 }, 00:09:56.198 { 00:09:56.198 "name": "BaseBdev2", 00:09:56.199 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:56.199 "is_configured": true, 00:09:56.199 "data_offset": 0, 00:09:56.199 "data_size": 65536 00:09:56.199 }, 00:09:56.199 { 00:09:56.199 "name": "BaseBdev3", 00:09:56.199 "uuid": "3a11d316-b021-4077-b89e-59aa82ac4cb7", 00:09:56.199 "is_configured": true, 00:09:56.199 "data_offset": 0, 00:09:56.199 "data_size": 65536 00:09:56.199 }, 00:09:56.199 { 00:09:56.199 "name": "BaseBdev4", 00:09:56.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.199 "is_configured": false, 00:09:56.199 "data_offset": 
0, 00:09:56.199 "data_size": 0 00:09:56.199 } 00:09:56.199 ] 00:09:56.199 }' 00:09:56.199 11:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.199 11:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.767 [2024-12-06 11:52:54.241904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:56.767 [2024-12-06 11:52:54.241950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:56.767 [2024-12-06 11:52:54.241958] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:56.767 [2024-12-06 11:52:54.242226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:56.767 [2024-12-06 11:52:54.242410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:56.767 [2024-12-06 11:52:54.242427] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:56.767 [2024-12-06 11:52:54.242618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.767 BaseBdev4 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.767 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.767 [ 00:09:56.767 { 00:09:56.767 "name": "BaseBdev4", 00:09:56.767 "aliases": [ 00:09:56.767 "3d60cb35-9f20-42e0-b3fd-674edcf135b3" 00:09:56.767 ], 00:09:56.767 "product_name": "Malloc disk", 00:09:56.767 "block_size": 512, 00:09:56.767 "num_blocks": 65536, 00:09:56.767 "uuid": "3d60cb35-9f20-42e0-b3fd-674edcf135b3", 00:09:56.767 "assigned_rate_limits": { 00:09:56.767 "rw_ios_per_sec": 0, 00:09:56.767 "rw_mbytes_per_sec": 0, 00:09:56.767 "r_mbytes_per_sec": 0, 00:09:56.767 "w_mbytes_per_sec": 0 00:09:56.767 }, 00:09:56.767 "claimed": true, 00:09:56.767 "claim_type": "exclusive_write", 00:09:56.767 "zoned": false, 00:09:56.767 "supported_io_types": { 00:09:56.767 "read": true, 00:09:56.767 "write": true, 00:09:56.767 "unmap": true, 00:09:56.768 "flush": true, 00:09:56.768 "reset": true, 00:09:56.768 "nvme_admin": false, 00:09:56.768 "nvme_io": 
false, 00:09:56.768 "nvme_io_md": false, 00:09:56.768 "write_zeroes": true, 00:09:56.768 "zcopy": true, 00:09:56.768 "get_zone_info": false, 00:09:56.768 "zone_management": false, 00:09:56.768 "zone_append": false, 00:09:56.768 "compare": false, 00:09:56.768 "compare_and_write": false, 00:09:56.768 "abort": true, 00:09:56.768 "seek_hole": false, 00:09:56.768 "seek_data": false, 00:09:56.768 "copy": true, 00:09:56.768 "nvme_iov_md": false 00:09:56.768 }, 00:09:56.768 "memory_domains": [ 00:09:56.768 { 00:09:56.768 "dma_device_id": "system", 00:09:56.768 "dma_device_type": 1 00:09:56.768 }, 00:09:56.768 { 00:09:56.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.768 "dma_device_type": 2 00:09:56.768 } 00:09:56.768 ], 00:09:56.768 "driver_specific": {} 00:09:56.768 } 00:09:56.768 ] 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.768 "name": "Existed_Raid", 00:09:56.768 "uuid": "de01fc4f-9566-4d62-9116-86772a0b9ea3", 00:09:56.768 "strip_size_kb": 0, 00:09:56.768 "state": "online", 00:09:56.768 "raid_level": "raid1", 00:09:56.768 "superblock": false, 00:09:56.768 "num_base_bdevs": 4, 00:09:56.768 "num_base_bdevs_discovered": 4, 00:09:56.768 "num_base_bdevs_operational": 4, 00:09:56.768 "base_bdevs_list": [ 00:09:56.768 { 00:09:56.768 "name": "BaseBdev1", 00:09:56.768 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:56.768 "is_configured": true, 00:09:56.768 "data_offset": 0, 00:09:56.768 "data_size": 65536 00:09:56.768 }, 00:09:56.768 { 00:09:56.768 "name": "BaseBdev2", 00:09:56.768 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:56.768 "is_configured": true, 00:09:56.768 "data_offset": 0, 00:09:56.768 "data_size": 65536 00:09:56.768 }, 00:09:56.768 { 00:09:56.768 "name": "BaseBdev3", 00:09:56.768 "uuid": "3a11d316-b021-4077-b89e-59aa82ac4cb7", 
00:09:56.768 "is_configured": true, 00:09:56.768 "data_offset": 0, 00:09:56.768 "data_size": 65536 00:09:56.768 }, 00:09:56.768 { 00:09:56.768 "name": "BaseBdev4", 00:09:56.768 "uuid": "3d60cb35-9f20-42e0-b3fd-674edcf135b3", 00:09:56.768 "is_configured": true, 00:09:56.768 "data_offset": 0, 00:09:56.768 "data_size": 65536 00:09:56.768 } 00:09:56.768 ] 00:09:56.768 }' 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.768 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 [2024-12-06 11:52:54.693463] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.027 11:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.027 "name": "Existed_Raid", 00:09:57.027 "aliases": [ 00:09:57.028 "de01fc4f-9566-4d62-9116-86772a0b9ea3" 00:09:57.028 ], 00:09:57.028 "product_name": "Raid Volume", 00:09:57.028 "block_size": 512, 00:09:57.028 "num_blocks": 65536, 00:09:57.028 "uuid": "de01fc4f-9566-4d62-9116-86772a0b9ea3", 00:09:57.028 "assigned_rate_limits": { 00:09:57.028 "rw_ios_per_sec": 0, 00:09:57.028 "rw_mbytes_per_sec": 0, 00:09:57.028 "r_mbytes_per_sec": 0, 00:09:57.028 "w_mbytes_per_sec": 0 00:09:57.028 }, 00:09:57.028 "claimed": false, 00:09:57.028 "zoned": false, 00:09:57.028 "supported_io_types": { 00:09:57.028 "read": true, 00:09:57.028 "write": true, 00:09:57.028 "unmap": false, 00:09:57.028 "flush": false, 00:09:57.028 "reset": true, 00:09:57.028 "nvme_admin": false, 00:09:57.028 "nvme_io": false, 00:09:57.028 "nvme_io_md": false, 00:09:57.028 "write_zeroes": true, 00:09:57.028 "zcopy": false, 00:09:57.028 "get_zone_info": false, 00:09:57.028 "zone_management": false, 00:09:57.028 "zone_append": false, 00:09:57.028 "compare": false, 00:09:57.028 "compare_and_write": false, 00:09:57.028 "abort": false, 00:09:57.028 "seek_hole": false, 00:09:57.028 "seek_data": false, 00:09:57.028 "copy": false, 00:09:57.028 "nvme_iov_md": false 00:09:57.028 }, 00:09:57.028 "memory_domains": [ 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 
00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 00:09:57.028 } 00:09:57.028 ], 00:09:57.028 "driver_specific": { 00:09:57.028 "raid": { 00:09:57.028 "uuid": "de01fc4f-9566-4d62-9116-86772a0b9ea3", 00:09:57.028 "strip_size_kb": 0, 00:09:57.028 "state": "online", 00:09:57.028 "raid_level": "raid1", 00:09:57.028 "superblock": false, 00:09:57.028 "num_base_bdevs": 4, 00:09:57.028 "num_base_bdevs_discovered": 4, 00:09:57.028 "num_base_bdevs_operational": 4, 00:09:57.028 "base_bdevs_list": [ 00:09:57.028 { 00:09:57.028 "name": "BaseBdev1", 00:09:57.028 "uuid": "8d4ce4ce-fd8c-4275-99bc-b3f0e8815b6e", 00:09:57.028 "is_configured": true, 00:09:57.028 "data_offset": 0, 00:09:57.028 "data_size": 65536 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "name": "BaseBdev2", 00:09:57.028 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:57.028 "is_configured": true, 00:09:57.028 "data_offset": 0, 00:09:57.028 "data_size": 65536 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "name": "BaseBdev3", 00:09:57.028 "uuid": "3a11d316-b021-4077-b89e-59aa82ac4cb7", 00:09:57.028 "is_configured": true, 00:09:57.028 "data_offset": 0, 00:09:57.028 "data_size": 65536 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "name": "BaseBdev4", 00:09:57.028 "uuid": "3d60cb35-9f20-42e0-b3fd-674edcf135b3", 00:09:57.028 "is_configured": true, 00:09:57.028 "data_offset": 0, 00:09:57.028 "data_size": 65536 00:09:57.028 } 00:09:57.028 ] 00:09:57.028 } 00:09:57.028 } 00:09:57.028 }' 00:09:57.028 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.028 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.028 BaseBdev2 00:09:57.028 BaseBdev3 00:09:57.028 BaseBdev4' 00:09:57.028 
11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.288 11:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.288 [2024-12-06 11:52:55.008668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.288 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.546 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.546 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.546 "name": "Existed_Raid", 00:09:57.546 "uuid": "de01fc4f-9566-4d62-9116-86772a0b9ea3", 00:09:57.546 "strip_size_kb": 0, 00:09:57.546 "state": "online", 00:09:57.546 "raid_level": "raid1", 00:09:57.546 "superblock": false, 00:09:57.546 "num_base_bdevs": 4, 00:09:57.546 "num_base_bdevs_discovered": 3, 00:09:57.546 "num_base_bdevs_operational": 3, 00:09:57.546 "base_bdevs_list": [ 00:09:57.546 { 00:09:57.546 "name": null, 00:09:57.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.546 "is_configured": false, 00:09:57.546 "data_offset": 0, 00:09:57.546 "data_size": 65536 00:09:57.546 }, 00:09:57.546 { 00:09:57.546 "name": "BaseBdev2", 00:09:57.546 "uuid": "192c64dd-51b1-4635-8c58-c39fabab6f3c", 00:09:57.546 "is_configured": true, 00:09:57.546 "data_offset": 0, 00:09:57.547 "data_size": 65536 00:09:57.547 }, 00:09:57.547 { 00:09:57.547 "name": "BaseBdev3", 00:09:57.547 "uuid": "3a11d316-b021-4077-b89e-59aa82ac4cb7", 00:09:57.547 "is_configured": true, 00:09:57.547 "data_offset": 0, 00:09:57.547 "data_size": 65536 00:09:57.547 }, 00:09:57.547 { 
00:09:57.547 "name": "BaseBdev4", 00:09:57.547 "uuid": "3d60cb35-9f20-42e0-b3fd-674edcf135b3", 00:09:57.547 "is_configured": true, 00:09:57.547 "data_offset": 0, 00:09:57.547 "data_size": 65536 00:09:57.547 } 00:09:57.547 ] 00:09:57.547 }' 00:09:57.547 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.547 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.805 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:57.805 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.805 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.805 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.805 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.806 [2024-12-06 11:52:55.510902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.806 
11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.806 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.065 [2024-12-06 11:52:55.577796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.065 11:52:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.065 [2024-12-06 11:52:55.648792] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:58.065 [2024-12-06 11:52:55.648872] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.065 [2024-12-06 11:52:55.660180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.065 [2024-12-06 11:52:55.660326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.065 [2024-12-06 11:52:55.660352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.065 11:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.065 BaseBdev2 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.065 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.066 11:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.066 [ 00:09:58.066 { 00:09:58.066 "name": "BaseBdev2", 00:09:58.066 "aliases": [ 00:09:58.066 "28414645-9132-4297-9284-5c165b33bd13" 00:09:58.066 ], 00:09:58.066 "product_name": "Malloc disk", 00:09:58.066 "block_size": 512, 00:09:58.066 "num_blocks": 65536, 00:09:58.066 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:09:58.066 "assigned_rate_limits": { 00:09:58.066 "rw_ios_per_sec": 0, 00:09:58.066 "rw_mbytes_per_sec": 0, 00:09:58.066 "r_mbytes_per_sec": 0, 00:09:58.066 "w_mbytes_per_sec": 0 00:09:58.066 }, 00:09:58.066 "claimed": false, 00:09:58.066 "zoned": false, 00:09:58.066 "supported_io_types": { 00:09:58.066 "read": true, 00:09:58.066 "write": true, 00:09:58.066 "unmap": true, 00:09:58.066 "flush": true, 00:09:58.066 "reset": true, 00:09:58.066 "nvme_admin": false, 00:09:58.066 "nvme_io": false, 00:09:58.066 "nvme_io_md": false, 00:09:58.066 "write_zeroes": true, 00:09:58.066 "zcopy": true, 00:09:58.066 "get_zone_info": false, 00:09:58.066 "zone_management": false, 00:09:58.066 "zone_append": false, 00:09:58.066 "compare": false, 00:09:58.066 "compare_and_write": false, 
00:09:58.066 "abort": true, 00:09:58.066 "seek_hole": false, 00:09:58.066 "seek_data": false, 00:09:58.066 "copy": true, 00:09:58.066 "nvme_iov_md": false 00:09:58.066 }, 00:09:58.066 "memory_domains": [ 00:09:58.066 { 00:09:58.066 "dma_device_id": "system", 00:09:58.066 "dma_device_type": 1 00:09:58.066 }, 00:09:58.066 { 00:09:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.066 "dma_device_type": 2 00:09:58.066 } 00:09:58.066 ], 00:09:58.066 "driver_specific": {} 00:09:58.066 } 00:09:58.066 ] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.066 BaseBdev3 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.066 11:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.066 [ 00:09:58.066 { 00:09:58.066 "name": "BaseBdev3", 00:09:58.066 "aliases": [ 00:09:58.066 "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4" 00:09:58.066 ], 00:09:58.066 "product_name": "Malloc disk", 00:09:58.066 "block_size": 512, 00:09:58.066 "num_blocks": 65536, 00:09:58.066 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:09:58.066 "assigned_rate_limits": { 00:09:58.066 "rw_ios_per_sec": 0, 00:09:58.066 "rw_mbytes_per_sec": 0, 00:09:58.066 "r_mbytes_per_sec": 0, 00:09:58.066 "w_mbytes_per_sec": 0 00:09:58.066 }, 00:09:58.066 "claimed": false, 00:09:58.066 "zoned": false, 00:09:58.066 "supported_io_types": { 00:09:58.066 "read": true, 00:09:58.066 "write": true, 00:09:58.066 "unmap": true, 00:09:58.066 "flush": true, 00:09:58.066 "reset": true, 00:09:58.066 "nvme_admin": false, 00:09:58.066 "nvme_io": false, 00:09:58.066 "nvme_io_md": false, 00:09:58.066 "write_zeroes": true, 00:09:58.066 "zcopy": true, 00:09:58.066 "get_zone_info": false, 00:09:58.066 "zone_management": false, 00:09:58.066 "zone_append": false, 00:09:58.066 "compare": false, 00:09:58.066 "compare_and_write": false, 
00:09:58.066 "abort": true, 00:09:58.066 "seek_hole": false, 00:09:58.066 "seek_data": false, 00:09:58.066 "copy": true, 00:09:58.066 "nvme_iov_md": false 00:09:58.066 }, 00:09:58.066 "memory_domains": [ 00:09:58.066 { 00:09:58.066 "dma_device_id": "system", 00:09:58.066 "dma_device_type": 1 00:09:58.066 }, 00:09:58.066 { 00:09:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.066 "dma_device_type": 2 00:09:58.066 } 00:09:58.066 ], 00:09:58.066 "driver_specific": {} 00:09:58.066 } 00:09:58.066 ] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.066 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.326 BaseBdev4 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.326 11:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.326 [ 00:09:58.326 { 00:09:58.326 "name": "BaseBdev4", 00:09:58.326 "aliases": [ 00:09:58.326 "71dd912f-12c4-4095-8d6e-05dd99d93c6d" 00:09:58.326 ], 00:09:58.326 "product_name": "Malloc disk", 00:09:58.326 "block_size": 512, 00:09:58.326 "num_blocks": 65536, 00:09:58.326 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:09:58.326 "assigned_rate_limits": { 00:09:58.326 "rw_ios_per_sec": 0, 00:09:58.326 "rw_mbytes_per_sec": 0, 00:09:58.326 "r_mbytes_per_sec": 0, 00:09:58.326 "w_mbytes_per_sec": 0 00:09:58.326 }, 00:09:58.326 "claimed": false, 00:09:58.326 "zoned": false, 00:09:58.326 "supported_io_types": { 00:09:58.326 "read": true, 00:09:58.326 "write": true, 00:09:58.326 "unmap": true, 00:09:58.326 "flush": true, 00:09:58.326 "reset": true, 00:09:58.326 "nvme_admin": false, 00:09:58.326 "nvme_io": false, 00:09:58.326 "nvme_io_md": false, 00:09:58.326 "write_zeroes": true, 00:09:58.326 "zcopy": true, 00:09:58.326 "get_zone_info": false, 00:09:58.326 "zone_management": false, 00:09:58.326 "zone_append": false, 00:09:58.326 "compare": false, 00:09:58.326 "compare_and_write": false, 
00:09:58.326 "abort": true, 00:09:58.326 "seek_hole": false, 00:09:58.326 "seek_data": false, 00:09:58.326 "copy": true, 00:09:58.326 "nvme_iov_md": false 00:09:58.326 }, 00:09:58.326 "memory_domains": [ 00:09:58.326 { 00:09:58.326 "dma_device_id": "system", 00:09:58.326 "dma_device_type": 1 00:09:58.326 }, 00:09:58.326 { 00:09:58.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.326 "dma_device_type": 2 00:09:58.326 } 00:09:58.326 ], 00:09:58.326 "driver_specific": {} 00:09:58.326 } 00:09:58.326 ] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.326 [2024-12-06 11:52:55.863251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.326 [2024-12-06 11:52:55.863356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.326 [2024-12-06 11:52:55.863394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.326 [2024-12-06 11:52:55.865104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.326 [2024-12-06 11:52:55.865199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.326 11:52:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.326 "name": "Existed_Raid", 00:09:58.326 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.326 "strip_size_kb": 0, 00:09:58.326 "state": "configuring", 00:09:58.326 "raid_level": "raid1", 00:09:58.326 "superblock": false, 00:09:58.326 "num_base_bdevs": 4, 00:09:58.326 "num_base_bdevs_discovered": 3, 00:09:58.326 "num_base_bdevs_operational": 4, 00:09:58.326 "base_bdevs_list": [ 00:09:58.326 { 00:09:58.326 "name": "BaseBdev1", 00:09:58.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.326 "is_configured": false, 00:09:58.326 "data_offset": 0, 00:09:58.326 "data_size": 0 00:09:58.326 }, 00:09:58.326 { 00:09:58.326 "name": "BaseBdev2", 00:09:58.326 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:09:58.326 "is_configured": true, 00:09:58.326 "data_offset": 0, 00:09:58.326 "data_size": 65536 00:09:58.326 }, 00:09:58.326 { 00:09:58.326 "name": "BaseBdev3", 00:09:58.326 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:09:58.326 "is_configured": true, 00:09:58.326 "data_offset": 0, 00:09:58.326 "data_size": 65536 00:09:58.326 }, 00:09:58.326 { 00:09:58.326 "name": "BaseBdev4", 00:09:58.326 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:09:58.326 "is_configured": true, 00:09:58.326 "data_offset": 0, 00:09:58.326 "data_size": 65536 00:09:58.326 } 00:09:58.326 ] 00:09:58.326 }' 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.326 11:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.585 [2024-12-06 11:52:56.246564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.585 "name": "Existed_Raid", 00:09:58.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.585 
"strip_size_kb": 0, 00:09:58.585 "state": "configuring", 00:09:58.585 "raid_level": "raid1", 00:09:58.585 "superblock": false, 00:09:58.585 "num_base_bdevs": 4, 00:09:58.585 "num_base_bdevs_discovered": 2, 00:09:58.585 "num_base_bdevs_operational": 4, 00:09:58.585 "base_bdevs_list": [ 00:09:58.585 { 00:09:58.585 "name": "BaseBdev1", 00:09:58.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.585 "is_configured": false, 00:09:58.585 "data_offset": 0, 00:09:58.585 "data_size": 0 00:09:58.585 }, 00:09:58.585 { 00:09:58.585 "name": null, 00:09:58.585 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:09:58.585 "is_configured": false, 00:09:58.585 "data_offset": 0, 00:09:58.585 "data_size": 65536 00:09:58.585 }, 00:09:58.585 { 00:09:58.585 "name": "BaseBdev3", 00:09:58.585 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:09:58.585 "is_configured": true, 00:09:58.585 "data_offset": 0, 00:09:58.585 "data_size": 65536 00:09:58.585 }, 00:09:58.585 { 00:09:58.585 "name": "BaseBdev4", 00:09:58.585 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:09:58.585 "is_configured": true, 00:09:58.585 "data_offset": 0, 00:09:58.585 "data_size": 65536 00:09:58.585 } 00:09:58.585 ] 00:09:58.585 }' 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.585 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.153 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.153 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.153 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.154 11:52:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.154 [2024-12-06 11:52:56.772681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.154 BaseBdev1 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.154 [ 00:09:59.154 { 00:09:59.154 "name": "BaseBdev1", 00:09:59.154 "aliases": [ 00:09:59.154 "60ef842b-9af2-4285-b99d-16d83b87a896" 00:09:59.154 ], 00:09:59.154 "product_name": "Malloc disk", 00:09:59.154 "block_size": 512, 00:09:59.154 "num_blocks": 65536, 00:09:59.154 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:09:59.154 "assigned_rate_limits": { 00:09:59.154 "rw_ios_per_sec": 0, 00:09:59.154 "rw_mbytes_per_sec": 0, 00:09:59.154 "r_mbytes_per_sec": 0, 00:09:59.154 "w_mbytes_per_sec": 0 00:09:59.154 }, 00:09:59.154 "claimed": true, 00:09:59.154 "claim_type": "exclusive_write", 00:09:59.154 "zoned": false, 00:09:59.154 "supported_io_types": { 00:09:59.154 "read": true, 00:09:59.154 "write": true, 00:09:59.154 "unmap": true, 00:09:59.154 "flush": true, 00:09:59.154 "reset": true, 00:09:59.154 "nvme_admin": false, 00:09:59.154 "nvme_io": false, 00:09:59.154 "nvme_io_md": false, 00:09:59.154 "write_zeroes": true, 00:09:59.154 "zcopy": true, 00:09:59.154 "get_zone_info": false, 00:09:59.154 "zone_management": false, 00:09:59.154 "zone_append": false, 00:09:59.154 "compare": false, 00:09:59.154 "compare_and_write": false, 00:09:59.154 "abort": true, 00:09:59.154 "seek_hole": false, 00:09:59.154 "seek_data": false, 00:09:59.154 "copy": true, 00:09:59.154 "nvme_iov_md": false 00:09:59.154 }, 00:09:59.154 "memory_domains": [ 00:09:59.154 { 00:09:59.154 "dma_device_id": "system", 00:09:59.154 "dma_device_type": 1 00:09:59.154 }, 00:09:59.154 { 00:09:59.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.154 "dma_device_type": 2 00:09:59.154 } 00:09:59.154 ], 00:09:59.154 "driver_specific": {} 00:09:59.154 } 00:09:59.154 ] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.154 "name": "Existed_Raid", 00:09:59.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.154 
"strip_size_kb": 0, 00:09:59.154 "state": "configuring", 00:09:59.154 "raid_level": "raid1", 00:09:59.154 "superblock": false, 00:09:59.154 "num_base_bdevs": 4, 00:09:59.154 "num_base_bdevs_discovered": 3, 00:09:59.154 "num_base_bdevs_operational": 4, 00:09:59.154 "base_bdevs_list": [ 00:09:59.154 { 00:09:59.154 "name": "BaseBdev1", 00:09:59.154 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:09:59.154 "is_configured": true, 00:09:59.154 "data_offset": 0, 00:09:59.154 "data_size": 65536 00:09:59.154 }, 00:09:59.154 { 00:09:59.154 "name": null, 00:09:59.154 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:09:59.154 "is_configured": false, 00:09:59.154 "data_offset": 0, 00:09:59.154 "data_size": 65536 00:09:59.154 }, 00:09:59.154 { 00:09:59.154 "name": "BaseBdev3", 00:09:59.154 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:09:59.154 "is_configured": true, 00:09:59.154 "data_offset": 0, 00:09:59.154 "data_size": 65536 00:09:59.154 }, 00:09:59.154 { 00:09:59.154 "name": "BaseBdev4", 00:09:59.154 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:09:59.154 "is_configured": true, 00:09:59.154 "data_offset": 0, 00:09:59.154 "data_size": 65536 00:09:59.154 } 00:09:59.154 ] 00:09:59.154 }' 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.154 11:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.722 
11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.722 [2024-12-06 11:52:57.283861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.722 11:52:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.722 "name": "Existed_Raid", 00:09:59.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.722 "strip_size_kb": 0, 00:09:59.722 "state": "configuring", 00:09:59.722 "raid_level": "raid1", 00:09:59.722 "superblock": false, 00:09:59.722 "num_base_bdevs": 4, 00:09:59.722 "num_base_bdevs_discovered": 2, 00:09:59.722 "num_base_bdevs_operational": 4, 00:09:59.722 "base_bdevs_list": [ 00:09:59.722 { 00:09:59.722 "name": "BaseBdev1", 00:09:59.722 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:09:59.722 "is_configured": true, 00:09:59.722 "data_offset": 0, 00:09:59.722 "data_size": 65536 00:09:59.722 }, 00:09:59.722 { 00:09:59.722 "name": null, 00:09:59.722 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:09:59.722 "is_configured": false, 00:09:59.722 "data_offset": 0, 00:09:59.722 "data_size": 65536 00:09:59.722 }, 00:09:59.722 { 00:09:59.722 "name": null, 00:09:59.722 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:09:59.722 "is_configured": false, 00:09:59.722 "data_offset": 0, 00:09:59.722 "data_size": 65536 00:09:59.722 }, 00:09:59.722 { 00:09:59.722 "name": "BaseBdev4", 00:09:59.722 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:09:59.722 "is_configured": true, 00:09:59.722 "data_offset": 0, 00:09:59.722 "data_size": 65536 00:09:59.722 } 00:09:59.722 ] 00:09:59.722 }' 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.722 11:52:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.981 [2024-12-06 11:52:57.699196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.981 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.240 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.240 "name": "Existed_Raid", 00:10:00.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.240 "strip_size_kb": 0, 00:10:00.240 "state": "configuring", 00:10:00.240 "raid_level": "raid1", 00:10:00.240 "superblock": false, 00:10:00.240 "num_base_bdevs": 4, 00:10:00.240 "num_base_bdevs_discovered": 3, 00:10:00.240 "num_base_bdevs_operational": 4, 00:10:00.240 "base_bdevs_list": [ 00:10:00.240 { 00:10:00.240 "name": "BaseBdev1", 00:10:00.240 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:00.240 "is_configured": true, 00:10:00.240 "data_offset": 0, 00:10:00.240 "data_size": 65536 00:10:00.240 }, 00:10:00.240 { 00:10:00.240 "name": null, 00:10:00.240 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:10:00.240 "is_configured": false, 00:10:00.240 "data_offset": 0, 00:10:00.240 "data_size": 65536 00:10:00.240 }, 00:10:00.240 { 
00:10:00.240 "name": "BaseBdev3", 00:10:00.240 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:10:00.240 "is_configured": true, 00:10:00.240 "data_offset": 0, 00:10:00.240 "data_size": 65536 00:10:00.240 }, 00:10:00.240 { 00:10:00.240 "name": "BaseBdev4", 00:10:00.240 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:10:00.240 "is_configured": true, 00:10:00.240 "data_offset": 0, 00:10:00.240 "data_size": 65536 00:10:00.240 } 00:10:00.240 ] 00:10:00.240 }' 00:10:00.240 11:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.240 11:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.499 [2024-12-06 11:52:58.178385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.499 "name": "Existed_Raid", 00:10:00.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.499 "strip_size_kb": 0, 00:10:00.499 "state": "configuring", 00:10:00.499 "raid_level": "raid1", 00:10:00.499 "superblock": false, 00:10:00.499 
"num_base_bdevs": 4, 00:10:00.499 "num_base_bdevs_discovered": 2, 00:10:00.499 "num_base_bdevs_operational": 4, 00:10:00.499 "base_bdevs_list": [ 00:10:00.499 { 00:10:00.499 "name": null, 00:10:00.499 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:00.499 "is_configured": false, 00:10:00.499 "data_offset": 0, 00:10:00.499 "data_size": 65536 00:10:00.499 }, 00:10:00.499 { 00:10:00.499 "name": null, 00:10:00.499 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:10:00.499 "is_configured": false, 00:10:00.499 "data_offset": 0, 00:10:00.499 "data_size": 65536 00:10:00.499 }, 00:10:00.499 { 00:10:00.499 "name": "BaseBdev3", 00:10:00.499 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:10:00.499 "is_configured": true, 00:10:00.499 "data_offset": 0, 00:10:00.499 "data_size": 65536 00:10:00.499 }, 00:10:00.499 { 00:10:00.499 "name": "BaseBdev4", 00:10:00.499 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:10:00.499 "is_configured": true, 00:10:00.499 "data_offset": 0, 00:10:00.499 "data_size": 65536 00:10:00.499 } 00:10:00.499 ] 00:10:00.499 }' 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.499 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.068 11:52:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.068 [2024-12-06 11:52:58.659900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.068 11:52:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.068 "name": "Existed_Raid", 00:10:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.068 "strip_size_kb": 0, 00:10:01.068 "state": "configuring", 00:10:01.068 "raid_level": "raid1", 00:10:01.068 "superblock": false, 00:10:01.068 "num_base_bdevs": 4, 00:10:01.068 "num_base_bdevs_discovered": 3, 00:10:01.068 "num_base_bdevs_operational": 4, 00:10:01.068 "base_bdevs_list": [ 00:10:01.068 { 00:10:01.068 "name": null, 00:10:01.068 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:01.068 "is_configured": false, 00:10:01.068 "data_offset": 0, 00:10:01.068 "data_size": 65536 00:10:01.068 }, 00:10:01.068 { 00:10:01.068 "name": "BaseBdev2", 00:10:01.068 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:10:01.068 "is_configured": true, 00:10:01.068 "data_offset": 0, 00:10:01.068 "data_size": 65536 00:10:01.068 }, 00:10:01.068 { 00:10:01.068 "name": "BaseBdev3", 00:10:01.068 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:10:01.068 "is_configured": true, 00:10:01.068 "data_offset": 0, 00:10:01.068 "data_size": 65536 00:10:01.068 }, 00:10:01.068 { 00:10:01.068 "name": "BaseBdev4", 00:10:01.068 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:10:01.068 "is_configured": true, 00:10:01.068 "data_offset": 0, 00:10:01.068 "data_size": 65536 00:10:01.068 } 00:10:01.068 ] 00:10:01.068 }' 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.068 11:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.327 11:52:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.327 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.327 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.327 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 60ef842b-9af2-4285-b99d-16d83b87a896 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 [2024-12-06 11:52:59.181789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.585 [2024-12-06 11:52:59.181830] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:01.585 [2024-12-06 11:52:59.181839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.585 [2024-12-06 11:52:59.182092] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:01.585 [2024-12-06 11:52:59.182205] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:01.585 [2024-12-06 11:52:59.182214] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:01.585 [2024-12-06 11:52:59.182400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.585 NewBaseBdev 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.585 11:52:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 [ 00:10:01.585 { 00:10:01.585 "name": "NewBaseBdev", 00:10:01.585 "aliases": [ 00:10:01.585 "60ef842b-9af2-4285-b99d-16d83b87a896" 00:10:01.585 ], 00:10:01.585 "product_name": "Malloc disk", 00:10:01.585 "block_size": 512, 00:10:01.585 "num_blocks": 65536, 00:10:01.585 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:01.585 "assigned_rate_limits": { 00:10:01.585 "rw_ios_per_sec": 0, 00:10:01.585 "rw_mbytes_per_sec": 0, 00:10:01.585 "r_mbytes_per_sec": 0, 00:10:01.585 "w_mbytes_per_sec": 0 00:10:01.585 }, 00:10:01.585 "claimed": true, 00:10:01.585 "claim_type": "exclusive_write", 00:10:01.585 "zoned": false, 00:10:01.585 "supported_io_types": { 00:10:01.585 "read": true, 00:10:01.585 "write": true, 00:10:01.585 "unmap": true, 00:10:01.585 "flush": true, 00:10:01.585 "reset": true, 00:10:01.585 "nvme_admin": false, 00:10:01.585 "nvme_io": false, 00:10:01.585 "nvme_io_md": false, 00:10:01.585 "write_zeroes": true, 00:10:01.585 "zcopy": true, 00:10:01.585 "get_zone_info": false, 00:10:01.585 "zone_management": false, 00:10:01.585 "zone_append": false, 00:10:01.585 "compare": false, 00:10:01.585 "compare_and_write": false, 00:10:01.585 "abort": true, 00:10:01.585 "seek_hole": false, 00:10:01.585 "seek_data": false, 00:10:01.585 "copy": true, 00:10:01.585 "nvme_iov_md": false 00:10:01.585 }, 00:10:01.585 "memory_domains": [ 00:10:01.585 { 00:10:01.585 "dma_device_id": "system", 00:10:01.585 "dma_device_type": 1 00:10:01.585 }, 00:10:01.585 { 00:10:01.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.585 "dma_device_type": 2 00:10:01.585 } 00:10:01.585 ], 00:10:01.585 "driver_specific": {} 00:10:01.585 } 00:10:01.585 ] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.585 11:52:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.585 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.585 "name": "Existed_Raid", 00:10:01.585 "uuid": "2a4f1524-52e8-4356-9b03-e78272c0e505", 00:10:01.585 "strip_size_kb": 0, 00:10:01.585 "state": "online", 00:10:01.585 "raid_level": "raid1", 
00:10:01.585 "superblock": false, 00:10:01.585 "num_base_bdevs": 4, 00:10:01.585 "num_base_bdevs_discovered": 4, 00:10:01.585 "num_base_bdevs_operational": 4, 00:10:01.585 "base_bdevs_list": [ 00:10:01.585 { 00:10:01.585 "name": "NewBaseBdev", 00:10:01.585 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:01.585 "is_configured": true, 00:10:01.585 "data_offset": 0, 00:10:01.585 "data_size": 65536 00:10:01.585 }, 00:10:01.585 { 00:10:01.585 "name": "BaseBdev2", 00:10:01.585 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:10:01.585 "is_configured": true, 00:10:01.585 "data_offset": 0, 00:10:01.585 "data_size": 65536 00:10:01.585 }, 00:10:01.585 { 00:10:01.585 "name": "BaseBdev3", 00:10:01.585 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:10:01.586 "is_configured": true, 00:10:01.586 "data_offset": 0, 00:10:01.586 "data_size": 65536 00:10:01.586 }, 00:10:01.586 { 00:10:01.586 "name": "BaseBdev4", 00:10:01.586 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:10:01.586 "is_configured": true, 00:10:01.586 "data_offset": 0, 00:10:01.586 "data_size": 65536 00:10:01.586 } 00:10:01.586 ] 00:10:01.586 }' 00:10:01.586 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.586 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.153 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.153 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 [2024-12-06 11:52:59.673277] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.154 "name": "Existed_Raid", 00:10:02.154 "aliases": [ 00:10:02.154 "2a4f1524-52e8-4356-9b03-e78272c0e505" 00:10:02.154 ], 00:10:02.154 "product_name": "Raid Volume", 00:10:02.154 "block_size": 512, 00:10:02.154 "num_blocks": 65536, 00:10:02.154 "uuid": "2a4f1524-52e8-4356-9b03-e78272c0e505", 00:10:02.154 "assigned_rate_limits": { 00:10:02.154 "rw_ios_per_sec": 0, 00:10:02.154 "rw_mbytes_per_sec": 0, 00:10:02.154 "r_mbytes_per_sec": 0, 00:10:02.154 "w_mbytes_per_sec": 0 00:10:02.154 }, 00:10:02.154 "claimed": false, 00:10:02.154 "zoned": false, 00:10:02.154 "supported_io_types": { 00:10:02.154 "read": true, 00:10:02.154 "write": true, 00:10:02.154 "unmap": false, 00:10:02.154 "flush": false, 00:10:02.154 "reset": true, 00:10:02.154 "nvme_admin": false, 00:10:02.154 "nvme_io": false, 00:10:02.154 "nvme_io_md": false, 00:10:02.154 "write_zeroes": true, 00:10:02.154 "zcopy": false, 00:10:02.154 "get_zone_info": false, 00:10:02.154 "zone_management": false, 00:10:02.154 "zone_append": false, 00:10:02.154 "compare": false, 00:10:02.154 "compare_and_write": false, 00:10:02.154 "abort": false, 00:10:02.154 "seek_hole": false, 00:10:02.154 "seek_data": false, 00:10:02.154 "copy": false, 00:10:02.154 
"nvme_iov_md": false 00:10:02.154 }, 00:10:02.154 "memory_domains": [ 00:10:02.154 { 00:10:02.154 "dma_device_id": "system", 00:10:02.154 "dma_device_type": 1 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.154 "dma_device_type": 2 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "system", 00:10:02.154 "dma_device_type": 1 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.154 "dma_device_type": 2 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "system", 00:10:02.154 "dma_device_type": 1 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.154 "dma_device_type": 2 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "system", 00:10:02.154 "dma_device_type": 1 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.154 "dma_device_type": 2 00:10:02.154 } 00:10:02.154 ], 00:10:02.154 "driver_specific": { 00:10:02.154 "raid": { 00:10:02.154 "uuid": "2a4f1524-52e8-4356-9b03-e78272c0e505", 00:10:02.154 "strip_size_kb": 0, 00:10:02.154 "state": "online", 00:10:02.154 "raid_level": "raid1", 00:10:02.154 "superblock": false, 00:10:02.154 "num_base_bdevs": 4, 00:10:02.154 "num_base_bdevs_discovered": 4, 00:10:02.154 "num_base_bdevs_operational": 4, 00:10:02.154 "base_bdevs_list": [ 00:10:02.154 { 00:10:02.154 "name": "NewBaseBdev", 00:10:02.154 "uuid": "60ef842b-9af2-4285-b99d-16d83b87a896", 00:10:02.154 "is_configured": true, 00:10:02.154 "data_offset": 0, 00:10:02.154 "data_size": 65536 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "name": "BaseBdev2", 00:10:02.154 "uuid": "28414645-9132-4297-9284-5c165b33bd13", 00:10:02.154 "is_configured": true, 00:10:02.154 "data_offset": 0, 00:10:02.154 "data_size": 65536 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "name": "BaseBdev3", 00:10:02.154 "uuid": "1eb8b18c-34d4-4969-89dd-c25ba7b5afa4", 00:10:02.154 "is_configured": true, 
00:10:02.154 "data_offset": 0, 00:10:02.154 "data_size": 65536 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "name": "BaseBdev4", 00:10:02.154 "uuid": "71dd912f-12c4-4095-8d6e-05dd99d93c6d", 00:10:02.154 "is_configured": true, 00:10:02.154 "data_offset": 0, 00:10:02.154 "data_size": 65536 00:10:02.154 } 00:10:02.154 ] 00:10:02.154 } 00:10:02.154 } 00:10:02.154 }' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.154 BaseBdev2 00:10:02.154 BaseBdev3 00:10:02.154 BaseBdev4' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.413 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.413 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.413 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.413 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.413 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.414 11:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 [2024-12-06 11:53:00.016370] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.414 [2024-12-06 11:53:00.016396] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.414 [2024-12-06 11:53:00.016472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.414 [2024-12-06 11:53:00.016717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.414 [2024-12-06 11:53:00.016731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83635 
00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83635 ']' 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83635 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83635 00:10:02.414 killing process with pid 83635 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83635' 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83635 00:10:02.414 [2024-12-06 11:53:00.063944] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.414 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83635 00:10:02.414 [2024-12-06 11:53:00.104203] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:02.673 00:10:02.673 real 0m9.373s 00:10:02.673 user 0m16.053s 00:10:02.673 sys 0m1.941s 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.673 ************************************ 00:10:02.673 END TEST raid_state_function_test 00:10:02.673 ************************************ 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.673 11:53:00 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:02.673 11:53:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:02.673 11:53:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.673 11:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.673 ************************************ 00:10:02.673 START TEST raid_state_function_test_sb 00:10:02.673 ************************************ 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.673 11:53:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:02.673 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:02.932 Process raid pid: 84284 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84284 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84284' 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84284 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84284 ']' 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.932 11:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.932 [2024-12-06 11:53:00.505071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:02.932 [2024-12-06 11:53:00.505314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.932 [2024-12-06 11:53:00.650335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.191 [2024-12-06 11:53:00.693905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.191 [2024-12-06 11:53:00.735021] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.191 [2024-12-06 11:53:00.735135] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.760 [2024-12-06 11:53:01.315657] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.760 [2024-12-06 11:53:01.315760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.760 [2024-12-06 11:53:01.315792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.760 [2024-12-06 11:53:01.315815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.760 [2024-12-06 11:53:01.315833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:03.760 [2024-12-06 11:53:01.315855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.760 [2024-12-06 11:53:01.315872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.760 [2024-12-06 11:53:01.315893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.760 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.761 11:53:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.761 "name": "Existed_Raid", 00:10:03.761 "uuid": "2a4d7c21-1e4e-48ff-82a2-f9f8a5eaac4b", 00:10:03.761 "strip_size_kb": 0, 00:10:03.761 "state": "configuring", 00:10:03.761 "raid_level": "raid1", 00:10:03.761 "superblock": true, 00:10:03.761 "num_base_bdevs": 4, 00:10:03.761 "num_base_bdevs_discovered": 0, 00:10:03.761 "num_base_bdevs_operational": 4, 00:10:03.761 "base_bdevs_list": [ 00:10:03.761 { 00:10:03.761 "name": "BaseBdev1", 00:10:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.761 "is_configured": false, 00:10:03.761 "data_offset": 0, 00:10:03.761 "data_size": 0 00:10:03.761 }, 00:10:03.761 { 00:10:03.761 "name": "BaseBdev2", 00:10:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.761 "is_configured": false, 00:10:03.761 "data_offset": 0, 00:10:03.761 "data_size": 0 00:10:03.761 }, 00:10:03.761 { 00:10:03.761 "name": "BaseBdev3", 00:10:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.761 "is_configured": false, 00:10:03.761 "data_offset": 0, 00:10:03.761 "data_size": 0 00:10:03.761 }, 00:10:03.761 { 00:10:03.761 "name": "BaseBdev4", 00:10:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.761 "is_configured": false, 00:10:03.761 "data_offset": 0, 00:10:03.761 "data_size": 0 00:10:03.761 } 00:10:03.761 ] 00:10:03.761 }' 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.761 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.020 11:53:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.020 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.020 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.020 [2024-12-06 11:53:01.774700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.020 [2024-12-06 11:53:01.774779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.281 [2024-12-06 11:53:01.782705] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.281 [2024-12-06 11:53:01.782792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.281 [2024-12-06 11:53:01.782818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.281 [2024-12-06 11:53:01.782840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.281 [2024-12-06 11:53:01.782857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.281 [2024-12-06 11:53:01.782868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.281 [2024-12-06 11:53:01.782874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:04.281 [2024-12-06 11:53:01.782882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.281 [2024-12-06 11:53:01.799516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.281 BaseBdev1 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.281 [ 00:10:04.281 { 00:10:04.281 "name": "BaseBdev1", 00:10:04.281 "aliases": [ 00:10:04.281 "3139b89d-de94-4384-bdd7-db86332af354" 00:10:04.281 ], 00:10:04.281 "product_name": "Malloc disk", 00:10:04.281 "block_size": 512, 00:10:04.281 "num_blocks": 65536, 00:10:04.281 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:04.281 "assigned_rate_limits": { 00:10:04.281 "rw_ios_per_sec": 0, 00:10:04.281 "rw_mbytes_per_sec": 0, 00:10:04.281 "r_mbytes_per_sec": 0, 00:10:04.281 "w_mbytes_per_sec": 0 00:10:04.281 }, 00:10:04.281 "claimed": true, 00:10:04.281 "claim_type": "exclusive_write", 00:10:04.281 "zoned": false, 00:10:04.281 "supported_io_types": { 00:10:04.281 "read": true, 00:10:04.281 "write": true, 00:10:04.281 "unmap": true, 00:10:04.281 "flush": true, 00:10:04.281 "reset": true, 00:10:04.281 "nvme_admin": false, 00:10:04.281 "nvme_io": false, 00:10:04.281 "nvme_io_md": false, 00:10:04.281 "write_zeroes": true, 00:10:04.281 "zcopy": true, 00:10:04.281 "get_zone_info": false, 00:10:04.281 "zone_management": false, 00:10:04.281 "zone_append": false, 00:10:04.281 "compare": false, 00:10:04.281 "compare_and_write": false, 00:10:04.281 "abort": true, 00:10:04.281 "seek_hole": false, 00:10:04.281 "seek_data": false, 00:10:04.281 "copy": true, 00:10:04.281 "nvme_iov_md": false 00:10:04.281 }, 00:10:04.281 "memory_domains": [ 00:10:04.281 { 00:10:04.281 "dma_device_id": "system", 00:10:04.281 "dma_device_type": 1 00:10:04.281 }, 00:10:04.281 { 00:10:04.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.281 "dma_device_type": 2 00:10:04.281 } 00:10:04.281 ], 00:10:04.281 "driver_specific": {} 
00:10:04.281 } 00:10:04.281 ] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.281 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.282 "name": "Existed_Raid", 00:10:04.282 "uuid": "4bdc368d-d6a4-48e4-9b4f-2324ce0e421f", 00:10:04.282 "strip_size_kb": 0, 00:10:04.282 "state": "configuring", 00:10:04.282 "raid_level": "raid1", 00:10:04.282 "superblock": true, 00:10:04.282 "num_base_bdevs": 4, 00:10:04.282 "num_base_bdevs_discovered": 1, 00:10:04.282 "num_base_bdevs_operational": 4, 00:10:04.282 "base_bdevs_list": [ 00:10:04.282 { 00:10:04.282 "name": "BaseBdev1", 00:10:04.282 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:04.282 "is_configured": true, 00:10:04.282 "data_offset": 2048, 00:10:04.282 "data_size": 63488 00:10:04.282 }, 00:10:04.282 { 00:10:04.282 "name": "BaseBdev2", 00:10:04.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.282 "is_configured": false, 00:10:04.282 "data_offset": 0, 00:10:04.282 "data_size": 0 00:10:04.282 }, 00:10:04.282 { 00:10:04.282 "name": "BaseBdev3", 00:10:04.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.282 "is_configured": false, 00:10:04.282 "data_offset": 0, 00:10:04.282 "data_size": 0 00:10:04.282 }, 00:10:04.282 { 00:10:04.282 "name": "BaseBdev4", 00:10:04.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.282 "is_configured": false, 00:10:04.282 "data_offset": 0, 00:10:04.282 "data_size": 0 00:10:04.282 } 00:10:04.282 ] 00:10:04.282 }' 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.282 11:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.542 [2024-12-06 11:53:02.262759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.542 [2024-12-06 11:53:02.262848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.542 [2024-12-06 11:53:02.270800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.542 [2024-12-06 11:53:02.272589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.542 [2024-12-06 11:53:02.272676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.542 [2024-12-06 11:53:02.272703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.542 [2024-12-06 11:53:02.272724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.542 [2024-12-06 11:53:02.272742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.542 [2024-12-06 11:53:02.272761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:04.542 11:53:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.542 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.543 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.802 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.802 "name": 
"Existed_Raid", 00:10:04.802 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:04.802 "strip_size_kb": 0, 00:10:04.802 "state": "configuring", 00:10:04.802 "raid_level": "raid1", 00:10:04.802 "superblock": true, 00:10:04.802 "num_base_bdevs": 4, 00:10:04.802 "num_base_bdevs_discovered": 1, 00:10:04.802 "num_base_bdevs_operational": 4, 00:10:04.802 "base_bdevs_list": [ 00:10:04.802 { 00:10:04.802 "name": "BaseBdev1", 00:10:04.802 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:04.802 "is_configured": true, 00:10:04.802 "data_offset": 2048, 00:10:04.802 "data_size": 63488 00:10:04.802 }, 00:10:04.802 { 00:10:04.802 "name": "BaseBdev2", 00:10:04.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.802 "is_configured": false, 00:10:04.802 "data_offset": 0, 00:10:04.803 "data_size": 0 00:10:04.803 }, 00:10:04.803 { 00:10:04.803 "name": "BaseBdev3", 00:10:04.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.803 "is_configured": false, 00:10:04.803 "data_offset": 0, 00:10:04.803 "data_size": 0 00:10:04.803 }, 00:10:04.803 { 00:10:04.803 "name": "BaseBdev4", 00:10:04.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.803 "is_configured": false, 00:10:04.803 "data_offset": 0, 00:10:04.803 "data_size": 0 00:10:04.803 } 00:10:04.803 ] 00:10:04.803 }' 00:10:04.803 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.803 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.062 [2024-12-06 11:53:02.671988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.062 
BaseBdev2 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.062 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.063 [ 00:10:05.063 { 00:10:05.063 "name": "BaseBdev2", 00:10:05.063 "aliases": [ 00:10:05.063 "fceb18a2-f6c6-4dda-8284-065ed22e4475" 00:10:05.063 ], 00:10:05.063 "product_name": "Malloc disk", 00:10:05.063 "block_size": 512, 00:10:05.063 "num_blocks": 65536, 00:10:05.063 "uuid": "fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:05.063 "assigned_rate_limits": { 
00:10:05.063 "rw_ios_per_sec": 0, 00:10:05.063 "rw_mbytes_per_sec": 0, 00:10:05.063 "r_mbytes_per_sec": 0, 00:10:05.063 "w_mbytes_per_sec": 0 00:10:05.063 }, 00:10:05.063 "claimed": true, 00:10:05.063 "claim_type": "exclusive_write", 00:10:05.063 "zoned": false, 00:10:05.063 "supported_io_types": { 00:10:05.063 "read": true, 00:10:05.063 "write": true, 00:10:05.063 "unmap": true, 00:10:05.063 "flush": true, 00:10:05.063 "reset": true, 00:10:05.063 "nvme_admin": false, 00:10:05.063 "nvme_io": false, 00:10:05.063 "nvme_io_md": false, 00:10:05.063 "write_zeroes": true, 00:10:05.063 "zcopy": true, 00:10:05.063 "get_zone_info": false, 00:10:05.063 "zone_management": false, 00:10:05.063 "zone_append": false, 00:10:05.063 "compare": false, 00:10:05.063 "compare_and_write": false, 00:10:05.063 "abort": true, 00:10:05.063 "seek_hole": false, 00:10:05.063 "seek_data": false, 00:10:05.063 "copy": true, 00:10:05.063 "nvme_iov_md": false 00:10:05.063 }, 00:10:05.063 "memory_domains": [ 00:10:05.063 { 00:10:05.063 "dma_device_id": "system", 00:10:05.063 "dma_device_type": 1 00:10:05.063 }, 00:10:05.063 { 00:10:05.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.063 "dma_device_type": 2 00:10:05.063 } 00:10:05.063 ], 00:10:05.063 "driver_specific": {} 00:10:05.063 } 00:10:05.063 ] 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.063 "name": "Existed_Raid", 00:10:05.063 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:05.063 "strip_size_kb": 0, 00:10:05.063 "state": "configuring", 00:10:05.063 "raid_level": "raid1", 00:10:05.063 "superblock": true, 00:10:05.063 "num_base_bdevs": 4, 00:10:05.063 "num_base_bdevs_discovered": 2, 00:10:05.063 "num_base_bdevs_operational": 4, 00:10:05.063 
"base_bdevs_list": [ 00:10:05.063 { 00:10:05.063 "name": "BaseBdev1", 00:10:05.063 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:05.063 "is_configured": true, 00:10:05.063 "data_offset": 2048, 00:10:05.063 "data_size": 63488 00:10:05.063 }, 00:10:05.063 { 00:10:05.063 "name": "BaseBdev2", 00:10:05.063 "uuid": "fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:05.063 "is_configured": true, 00:10:05.063 "data_offset": 2048, 00:10:05.063 "data_size": 63488 00:10:05.063 }, 00:10:05.063 { 00:10:05.063 "name": "BaseBdev3", 00:10:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.063 "is_configured": false, 00:10:05.063 "data_offset": 0, 00:10:05.063 "data_size": 0 00:10:05.063 }, 00:10:05.063 { 00:10:05.063 "name": "BaseBdev4", 00:10:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.063 "is_configured": false, 00:10:05.063 "data_offset": 0, 00:10:05.063 "data_size": 0 00:10:05.063 } 00:10:05.063 ] 00:10:05.063 }' 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.063 11:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.632 [2024-12-06 11:53:03.165927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.632 BaseBdev3 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.632 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.632 [ 00:10:05.632 { 00:10:05.632 "name": "BaseBdev3", 00:10:05.632 "aliases": [ 00:10:05.632 "388a545e-f9c2-4053-aaf5-5921340b4322" 00:10:05.632 ], 00:10:05.632 "product_name": "Malloc disk", 00:10:05.632 "block_size": 512, 00:10:05.632 "num_blocks": 65536, 00:10:05.632 "uuid": "388a545e-f9c2-4053-aaf5-5921340b4322", 00:10:05.632 "assigned_rate_limits": { 00:10:05.632 "rw_ios_per_sec": 0, 00:10:05.632 "rw_mbytes_per_sec": 0, 00:10:05.632 "r_mbytes_per_sec": 0, 00:10:05.632 "w_mbytes_per_sec": 0 00:10:05.632 }, 00:10:05.632 "claimed": true, 00:10:05.632 "claim_type": "exclusive_write", 00:10:05.632 "zoned": false, 00:10:05.632 "supported_io_types": { 00:10:05.632 "read": true, 00:10:05.632 
"write": true, 00:10:05.632 "unmap": true, 00:10:05.632 "flush": true, 00:10:05.632 "reset": true, 00:10:05.632 "nvme_admin": false, 00:10:05.632 "nvme_io": false, 00:10:05.632 "nvme_io_md": false, 00:10:05.632 "write_zeroes": true, 00:10:05.632 "zcopy": true, 00:10:05.632 "get_zone_info": false, 00:10:05.632 "zone_management": false, 00:10:05.632 "zone_append": false, 00:10:05.633 "compare": false, 00:10:05.633 "compare_and_write": false, 00:10:05.633 "abort": true, 00:10:05.633 "seek_hole": false, 00:10:05.633 "seek_data": false, 00:10:05.633 "copy": true, 00:10:05.633 "nvme_iov_md": false 00:10:05.633 }, 00:10:05.633 "memory_domains": [ 00:10:05.633 { 00:10:05.633 "dma_device_id": "system", 00:10:05.633 "dma_device_type": 1 00:10:05.633 }, 00:10:05.633 { 00:10:05.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.633 "dma_device_type": 2 00:10:05.633 } 00:10:05.633 ], 00:10:05.633 "driver_specific": {} 00:10:05.633 } 00:10:05.633 ] 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.633 "name": "Existed_Raid", 00:10:05.633 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:05.633 "strip_size_kb": 0, 00:10:05.633 "state": "configuring", 00:10:05.633 "raid_level": "raid1", 00:10:05.633 "superblock": true, 00:10:05.633 "num_base_bdevs": 4, 00:10:05.633 "num_base_bdevs_discovered": 3, 00:10:05.633 "num_base_bdevs_operational": 4, 00:10:05.633 "base_bdevs_list": [ 00:10:05.633 { 00:10:05.633 "name": "BaseBdev1", 00:10:05.633 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:05.633 "is_configured": true, 00:10:05.633 "data_offset": 2048, 00:10:05.633 "data_size": 63488 00:10:05.633 }, 00:10:05.633 { 00:10:05.633 "name": "BaseBdev2", 00:10:05.633 "uuid": 
"fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:05.633 "is_configured": true, 00:10:05.633 "data_offset": 2048, 00:10:05.633 "data_size": 63488 00:10:05.633 }, 00:10:05.633 { 00:10:05.633 "name": "BaseBdev3", 00:10:05.633 "uuid": "388a545e-f9c2-4053-aaf5-5921340b4322", 00:10:05.633 "is_configured": true, 00:10:05.633 "data_offset": 2048, 00:10:05.633 "data_size": 63488 00:10:05.633 }, 00:10:05.633 { 00:10:05.633 "name": "BaseBdev4", 00:10:05.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.633 "is_configured": false, 00:10:05.633 "data_offset": 0, 00:10:05.633 "data_size": 0 00:10:05.633 } 00:10:05.633 ] 00:10:05.633 }' 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.633 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.893 [2024-12-06 11:53:03.632158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.893 [2024-12-06 11:53:03.632434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:05.893 [2024-12-06 11:53:03.632483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.893 [2024-12-06 11:53:03.632766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:05.893 BaseBdev4 00:10:05.893 [2024-12-06 11:53:03.632953] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:05.893 [2024-12-06 11:53:03.633003] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:05.893 [2024-12-06 11:53:03.633165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.893 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.153 [ 00:10:06.153 { 00:10:06.153 "name": "BaseBdev4", 00:10:06.153 "aliases": [ 00:10:06.153 "e949d88b-491c-49d1-bc85-0b07cef6795d" 00:10:06.153 ], 00:10:06.153 "product_name": "Malloc disk", 00:10:06.153 "block_size": 512, 00:10:06.153 
"num_blocks": 65536, 00:10:06.153 "uuid": "e949d88b-491c-49d1-bc85-0b07cef6795d", 00:10:06.153 "assigned_rate_limits": { 00:10:06.153 "rw_ios_per_sec": 0, 00:10:06.153 "rw_mbytes_per_sec": 0, 00:10:06.153 "r_mbytes_per_sec": 0, 00:10:06.153 "w_mbytes_per_sec": 0 00:10:06.153 }, 00:10:06.153 "claimed": true, 00:10:06.153 "claim_type": "exclusive_write", 00:10:06.153 "zoned": false, 00:10:06.153 "supported_io_types": { 00:10:06.153 "read": true, 00:10:06.153 "write": true, 00:10:06.153 "unmap": true, 00:10:06.153 "flush": true, 00:10:06.153 "reset": true, 00:10:06.153 "nvme_admin": false, 00:10:06.153 "nvme_io": false, 00:10:06.153 "nvme_io_md": false, 00:10:06.153 "write_zeroes": true, 00:10:06.153 "zcopy": true, 00:10:06.153 "get_zone_info": false, 00:10:06.153 "zone_management": false, 00:10:06.153 "zone_append": false, 00:10:06.153 "compare": false, 00:10:06.153 "compare_and_write": false, 00:10:06.153 "abort": true, 00:10:06.153 "seek_hole": false, 00:10:06.153 "seek_data": false, 00:10:06.153 "copy": true, 00:10:06.153 "nvme_iov_md": false 00:10:06.153 }, 00:10:06.153 "memory_domains": [ 00:10:06.153 { 00:10:06.153 "dma_device_id": "system", 00:10:06.153 "dma_device_type": 1 00:10:06.153 }, 00:10:06.153 { 00:10:06.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.153 "dma_device_type": 2 00:10:06.153 } 00:10:06.153 ], 00:10:06.153 "driver_specific": {} 00:10:06.153 } 00:10:06.153 ] 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.153 "name": "Existed_Raid", 00:10:06.153 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:06.153 "strip_size_kb": 0, 00:10:06.153 "state": "online", 00:10:06.153 "raid_level": "raid1", 00:10:06.153 "superblock": true, 00:10:06.153 "num_base_bdevs": 4, 
00:10:06.153 "num_base_bdevs_discovered": 4, 00:10:06.153 "num_base_bdevs_operational": 4, 00:10:06.153 "base_bdevs_list": [ 00:10:06.153 { 00:10:06.153 "name": "BaseBdev1", 00:10:06.153 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:06.153 "is_configured": true, 00:10:06.153 "data_offset": 2048, 00:10:06.153 "data_size": 63488 00:10:06.153 }, 00:10:06.153 { 00:10:06.153 "name": "BaseBdev2", 00:10:06.153 "uuid": "fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:06.153 "is_configured": true, 00:10:06.153 "data_offset": 2048, 00:10:06.153 "data_size": 63488 00:10:06.153 }, 00:10:06.153 { 00:10:06.153 "name": "BaseBdev3", 00:10:06.153 "uuid": "388a545e-f9c2-4053-aaf5-5921340b4322", 00:10:06.153 "is_configured": true, 00:10:06.153 "data_offset": 2048, 00:10:06.153 "data_size": 63488 00:10:06.153 }, 00:10:06.153 { 00:10:06.153 "name": "BaseBdev4", 00:10:06.153 "uuid": "e949d88b-491c-49d1-bc85-0b07cef6795d", 00:10:06.153 "is_configured": true, 00:10:06.153 "data_offset": 2048, 00:10:06.153 "data_size": 63488 00:10:06.153 } 00:10:06.153 ] 00:10:06.153 }' 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.153 11:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.413 
11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 [2024-12-06 11:53:04.031752] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.413 "name": "Existed_Raid", 00:10:06.413 "aliases": [ 00:10:06.413 "1ee08a8b-9b51-4096-bf56-39db686d3a42" 00:10:06.413 ], 00:10:06.413 "product_name": "Raid Volume", 00:10:06.413 "block_size": 512, 00:10:06.413 "num_blocks": 63488, 00:10:06.413 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:06.413 "assigned_rate_limits": { 00:10:06.413 "rw_ios_per_sec": 0, 00:10:06.413 "rw_mbytes_per_sec": 0, 00:10:06.413 "r_mbytes_per_sec": 0, 00:10:06.413 "w_mbytes_per_sec": 0 00:10:06.413 }, 00:10:06.413 "claimed": false, 00:10:06.413 "zoned": false, 00:10:06.413 "supported_io_types": { 00:10:06.413 "read": true, 00:10:06.413 "write": true, 00:10:06.413 "unmap": false, 00:10:06.413 "flush": false, 00:10:06.413 "reset": true, 00:10:06.413 "nvme_admin": false, 00:10:06.413 "nvme_io": false, 00:10:06.413 "nvme_io_md": false, 00:10:06.413 "write_zeroes": true, 00:10:06.413 "zcopy": false, 00:10:06.413 "get_zone_info": false, 00:10:06.413 "zone_management": false, 00:10:06.413 "zone_append": false, 00:10:06.413 "compare": false, 00:10:06.413 "compare_and_write": false, 00:10:06.413 "abort": false, 00:10:06.413 "seek_hole": false, 00:10:06.413 "seek_data": false, 00:10:06.413 "copy": false, 00:10:06.413 
"nvme_iov_md": false 00:10:06.413 }, 00:10:06.413 "memory_domains": [ 00:10:06.413 { 00:10:06.413 "dma_device_id": "system", 00:10:06.413 "dma_device_type": 1 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.413 "dma_device_type": 2 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "system", 00:10:06.413 "dma_device_type": 1 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.413 "dma_device_type": 2 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "system", 00:10:06.413 "dma_device_type": 1 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.413 "dma_device_type": 2 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "system", 00:10:06.413 "dma_device_type": 1 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.413 "dma_device_type": 2 00:10:06.413 } 00:10:06.413 ], 00:10:06.413 "driver_specific": { 00:10:06.413 "raid": { 00:10:06.413 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:06.413 "strip_size_kb": 0, 00:10:06.413 "state": "online", 00:10:06.413 "raid_level": "raid1", 00:10:06.413 "superblock": true, 00:10:06.413 "num_base_bdevs": 4, 00:10:06.413 "num_base_bdevs_discovered": 4, 00:10:06.413 "num_base_bdevs_operational": 4, 00:10:06.413 "base_bdevs_list": [ 00:10:06.413 { 00:10:06.413 "name": "BaseBdev1", 00:10:06.413 "uuid": "3139b89d-de94-4384-bdd7-db86332af354", 00:10:06.413 "is_configured": true, 00:10:06.413 "data_offset": 2048, 00:10:06.413 "data_size": 63488 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "name": "BaseBdev2", 00:10:06.413 "uuid": "fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:06.413 "is_configured": true, 00:10:06.413 "data_offset": 2048, 00:10:06.413 "data_size": 63488 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "name": "BaseBdev3", 00:10:06.413 "uuid": "388a545e-f9c2-4053-aaf5-5921340b4322", 00:10:06.413 "is_configured": true, 
00:10:06.413 "data_offset": 2048, 00:10:06.413 "data_size": 63488 00:10:06.413 }, 00:10:06.413 { 00:10:06.413 "name": "BaseBdev4", 00:10:06.413 "uuid": "e949d88b-491c-49d1-bc85-0b07cef6795d", 00:10:06.413 "is_configured": true, 00:10:06.413 "data_offset": 2048, 00:10:06.413 "data_size": 63488 00:10:06.413 } 00:10:06.413 ] 00:10:06.413 } 00:10:06.413 } 00:10:06.413 }' 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.413 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.413 BaseBdev2 00:10:06.413 BaseBdev3 00:10:06.414 BaseBdev4' 00:10:06.414 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.673 11:53:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.673 [2024-12-06 11:53:04.378921] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.673 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:06.674 11:53:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.674 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.933 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.933 "name": "Existed_Raid", 00:10:06.933 "uuid": "1ee08a8b-9b51-4096-bf56-39db686d3a42", 00:10:06.933 "strip_size_kb": 0, 00:10:06.933 
"state": "online", 00:10:06.933 "raid_level": "raid1", 00:10:06.933 "superblock": true, 00:10:06.933 "num_base_bdevs": 4, 00:10:06.933 "num_base_bdevs_discovered": 3, 00:10:06.933 "num_base_bdevs_operational": 3, 00:10:06.933 "base_bdevs_list": [ 00:10:06.933 { 00:10:06.933 "name": null, 00:10:06.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.933 "is_configured": false, 00:10:06.933 "data_offset": 0, 00:10:06.933 "data_size": 63488 00:10:06.933 }, 00:10:06.933 { 00:10:06.933 "name": "BaseBdev2", 00:10:06.933 "uuid": "fceb18a2-f6c6-4dda-8284-065ed22e4475", 00:10:06.933 "is_configured": true, 00:10:06.933 "data_offset": 2048, 00:10:06.933 "data_size": 63488 00:10:06.933 }, 00:10:06.933 { 00:10:06.933 "name": "BaseBdev3", 00:10:06.933 "uuid": "388a545e-f9c2-4053-aaf5-5921340b4322", 00:10:06.933 "is_configured": true, 00:10:06.933 "data_offset": 2048, 00:10:06.933 "data_size": 63488 00:10:06.933 }, 00:10:06.933 { 00:10:06.933 "name": "BaseBdev4", 00:10:06.933 "uuid": "e949d88b-491c-49d1-bc85-0b07cef6795d", 00:10:06.933 "is_configured": true, 00:10:06.933 "data_offset": 2048, 00:10:06.933 "data_size": 63488 00:10:06.933 } 00:10:06.933 ] 00:10:06.933 }' 00:10:06.933 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.933 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 11:53:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 [2024-12-06 11:53:04.893252] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.192 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:07.193 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:07.193 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.193 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.193 [2024-12-06 11:53:04.940352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 11:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 [2024-12-06 11:53:05.007416] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:07.456 [2024-12-06 11:53:05.007556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.456 [2024-12-06 11:53:05.019033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.456 [2024-12-06 11:53:05.019151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.456 [2024-12-06 11:53:05.019191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 BaseBdev2 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.456 [ 00:10:07.456 { 00:10:07.456 "name": "BaseBdev2", 00:10:07.456 "aliases": [ 00:10:07.456 "4e39b709-35fb-4fa4-94ef-ea8425d79b4d" 00:10:07.456 ], 00:10:07.456 "product_name": "Malloc disk", 00:10:07.456 "block_size": 512, 00:10:07.456 "num_blocks": 65536, 00:10:07.456 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:07.456 "assigned_rate_limits": { 00:10:07.456 "rw_ios_per_sec": 0, 00:10:07.456 "rw_mbytes_per_sec": 0, 00:10:07.456 "r_mbytes_per_sec": 0, 00:10:07.456 "w_mbytes_per_sec": 0 00:10:07.456 }, 00:10:07.456 "claimed": false, 00:10:07.456 "zoned": false, 00:10:07.456 "supported_io_types": { 00:10:07.456 "read": true, 00:10:07.456 "write": true, 00:10:07.456 "unmap": true, 00:10:07.456 "flush": true, 00:10:07.456 "reset": true, 00:10:07.456 "nvme_admin": false, 00:10:07.456 "nvme_io": false, 00:10:07.456 "nvme_io_md": false, 00:10:07.456 "write_zeroes": true, 00:10:07.456 "zcopy": true, 00:10:07.456 "get_zone_info": false, 00:10:07.456 "zone_management": false, 00:10:07.456 "zone_append": false, 00:10:07.456 "compare": false, 00:10:07.456 "compare_and_write": false, 00:10:07.456 "abort": true, 00:10:07.456 "seek_hole": false, 00:10:07.456 "seek_data": false, 00:10:07.456 "copy": true, 00:10:07.456 "nvme_iov_md": false 00:10:07.456 }, 00:10:07.456 "memory_domains": [ 00:10:07.456 { 00:10:07.456 "dma_device_id": "system", 00:10:07.456 "dma_device_type": 1 00:10:07.456 }, 00:10:07.456 { 00:10:07.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.456 "dma_device_type": 2 00:10:07.456 } 00:10:07.456 ], 00:10:07.456 "driver_specific": {} 00:10:07.456 } 00:10:07.456 ] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.456 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.457 11:53:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 BaseBdev3 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 [ 00:10:07.457 { 00:10:07.457 "name": "BaseBdev3", 00:10:07.457 "aliases": [ 00:10:07.457 "cdd76e26-c720-4e18-902e-136cbc82acb8" 00:10:07.457 ], 00:10:07.457 "product_name": "Malloc disk", 00:10:07.457 "block_size": 512, 00:10:07.457 "num_blocks": 65536, 00:10:07.457 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:07.457 "assigned_rate_limits": { 00:10:07.457 "rw_ios_per_sec": 0, 00:10:07.457 "rw_mbytes_per_sec": 0, 00:10:07.457 "r_mbytes_per_sec": 0, 00:10:07.457 "w_mbytes_per_sec": 0 00:10:07.457 }, 00:10:07.457 "claimed": false, 00:10:07.457 "zoned": false, 00:10:07.457 "supported_io_types": { 00:10:07.457 "read": true, 00:10:07.457 "write": true, 00:10:07.457 "unmap": true, 00:10:07.457 "flush": true, 00:10:07.457 "reset": true, 00:10:07.457 "nvme_admin": false, 00:10:07.457 "nvme_io": false, 00:10:07.457 "nvme_io_md": false, 00:10:07.457 "write_zeroes": true, 00:10:07.457 "zcopy": true, 00:10:07.457 "get_zone_info": false, 00:10:07.457 "zone_management": false, 00:10:07.457 "zone_append": false, 00:10:07.457 "compare": false, 00:10:07.457 "compare_and_write": false, 00:10:07.457 "abort": true, 00:10:07.457 "seek_hole": false, 00:10:07.457 "seek_data": false, 00:10:07.457 "copy": true, 00:10:07.457 "nvme_iov_md": false 00:10:07.457 }, 00:10:07.457 "memory_domains": [ 00:10:07.457 { 00:10:07.457 "dma_device_id": "system", 00:10:07.457 "dma_device_type": 1 00:10:07.457 }, 00:10:07.457 { 00:10:07.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.457 "dma_device_type": 2 00:10:07.457 } 00:10:07.457 ], 00:10:07.457 "driver_specific": {} 00:10:07.457 } 00:10:07.457 ] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 BaseBdev4 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.457 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 [ 00:10:07.457 { 00:10:07.457 "name": "BaseBdev4", 00:10:07.457 "aliases": [ 00:10:07.457 "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7" 00:10:07.457 ], 00:10:07.457 "product_name": "Malloc disk", 00:10:07.457 "block_size": 512, 00:10:07.457 "num_blocks": 65536, 00:10:07.457 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:07.457 "assigned_rate_limits": { 00:10:07.457 "rw_ios_per_sec": 0, 00:10:07.457 "rw_mbytes_per_sec": 0, 00:10:07.457 "r_mbytes_per_sec": 0, 00:10:07.457 "w_mbytes_per_sec": 0 00:10:07.457 }, 00:10:07.457 "claimed": false, 00:10:07.457 "zoned": false, 00:10:07.457 "supported_io_types": { 00:10:07.457 "read": true, 00:10:07.457 "write": true, 00:10:07.716 "unmap": true, 00:10:07.716 "flush": true, 00:10:07.716 "reset": true, 00:10:07.716 "nvme_admin": false, 00:10:07.716 "nvme_io": false, 00:10:07.716 "nvme_io_md": false, 00:10:07.716 "write_zeroes": true, 00:10:07.716 "zcopy": true, 00:10:07.716 "get_zone_info": false, 00:10:07.716 "zone_management": false, 00:10:07.716 "zone_append": false, 00:10:07.716 "compare": false, 00:10:07.716 "compare_and_write": false, 00:10:07.716 "abort": true, 00:10:07.716 "seek_hole": false, 00:10:07.716 "seek_data": false, 00:10:07.716 "copy": true, 00:10:07.716 "nvme_iov_md": false 00:10:07.716 }, 00:10:07.716 "memory_domains": [ 00:10:07.716 { 00:10:07.716 "dma_device_id": "system", 00:10:07.716 "dma_device_type": 1 00:10:07.716 }, 00:10:07.716 { 00:10:07.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.716 "dma_device_type": 2 00:10:07.716 } 00:10:07.716 ], 00:10:07.716 "driver_specific": {} 00:10:07.716 } 00:10:07.716 ] 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.716 [2024-12-06 11:53:05.226021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.716 [2024-12-06 11:53:05.226121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.716 [2024-12-06 11:53:05.226158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.716 [2024-12-06 11:53:05.227879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.716 [2024-12-06 11:53:05.227962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.716 "name": "Existed_Raid", 00:10:07.716 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:07.716 "strip_size_kb": 0, 00:10:07.716 "state": "configuring", 00:10:07.716 "raid_level": "raid1", 00:10:07.716 "superblock": true, 00:10:07.716 "num_base_bdevs": 4, 00:10:07.716 "num_base_bdevs_discovered": 3, 00:10:07.716 "num_base_bdevs_operational": 4, 00:10:07.716 "base_bdevs_list": [ 00:10:07.716 { 00:10:07.716 "name": "BaseBdev1", 00:10:07.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.716 "is_configured": false, 00:10:07.716 "data_offset": 0, 00:10:07.716 "data_size": 0 00:10:07.716 }, 00:10:07.716 { 00:10:07.716 "name": "BaseBdev2", 00:10:07.716 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 
00:10:07.716 "is_configured": true, 00:10:07.716 "data_offset": 2048, 00:10:07.716 "data_size": 63488 00:10:07.716 }, 00:10:07.716 { 00:10:07.716 "name": "BaseBdev3", 00:10:07.716 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:07.716 "is_configured": true, 00:10:07.716 "data_offset": 2048, 00:10:07.716 "data_size": 63488 00:10:07.716 }, 00:10:07.716 { 00:10:07.716 "name": "BaseBdev4", 00:10:07.716 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:07.716 "is_configured": true, 00:10:07.716 "data_offset": 2048, 00:10:07.716 "data_size": 63488 00:10:07.716 } 00:10:07.716 ] 00:10:07.716 }' 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.716 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.983 [2024-12-06 11:53:05.681217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.983 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.255 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.255 "name": "Existed_Raid", 00:10:08.255 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:08.255 "strip_size_kb": 0, 00:10:08.255 "state": "configuring", 00:10:08.255 "raid_level": "raid1", 00:10:08.255 "superblock": true, 00:10:08.255 "num_base_bdevs": 4, 00:10:08.255 "num_base_bdevs_discovered": 2, 00:10:08.255 "num_base_bdevs_operational": 4, 00:10:08.255 "base_bdevs_list": [ 00:10:08.255 { 00:10:08.255 "name": "BaseBdev1", 00:10:08.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.255 "is_configured": false, 00:10:08.255 "data_offset": 0, 00:10:08.255 "data_size": 0 00:10:08.255 }, 00:10:08.255 { 00:10:08.255 "name": null, 00:10:08.255 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:08.255 
"is_configured": false, 00:10:08.255 "data_offset": 0, 00:10:08.255 "data_size": 63488 00:10:08.255 }, 00:10:08.255 { 00:10:08.255 "name": "BaseBdev3", 00:10:08.255 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:08.255 "is_configured": true, 00:10:08.255 "data_offset": 2048, 00:10:08.255 "data_size": 63488 00:10:08.255 }, 00:10:08.255 { 00:10:08.255 "name": "BaseBdev4", 00:10:08.255 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:08.255 "is_configured": true, 00:10:08.255 "data_offset": 2048, 00:10:08.255 "data_size": 63488 00:10:08.255 } 00:10:08.255 ] 00:10:08.255 }' 00:10:08.255 11:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.255 11:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.514 [2024-12-06 11:53:06.123298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.514 BaseBdev1 
00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.514 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.515 [ 00:10:08.515 { 00:10:08.515 "name": "BaseBdev1", 00:10:08.515 "aliases": [ 00:10:08.515 "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25" 00:10:08.515 ], 00:10:08.515 "product_name": "Malloc disk", 00:10:08.515 "block_size": 512, 00:10:08.515 "num_blocks": 65536, 00:10:08.515 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:08.515 "assigned_rate_limits": { 00:10:08.515 
"rw_ios_per_sec": 0, 00:10:08.515 "rw_mbytes_per_sec": 0, 00:10:08.515 "r_mbytes_per_sec": 0, 00:10:08.515 "w_mbytes_per_sec": 0 00:10:08.515 }, 00:10:08.515 "claimed": true, 00:10:08.515 "claim_type": "exclusive_write", 00:10:08.515 "zoned": false, 00:10:08.515 "supported_io_types": { 00:10:08.515 "read": true, 00:10:08.515 "write": true, 00:10:08.515 "unmap": true, 00:10:08.515 "flush": true, 00:10:08.515 "reset": true, 00:10:08.515 "nvme_admin": false, 00:10:08.515 "nvme_io": false, 00:10:08.515 "nvme_io_md": false, 00:10:08.515 "write_zeroes": true, 00:10:08.515 "zcopy": true, 00:10:08.515 "get_zone_info": false, 00:10:08.515 "zone_management": false, 00:10:08.515 "zone_append": false, 00:10:08.515 "compare": false, 00:10:08.515 "compare_and_write": false, 00:10:08.515 "abort": true, 00:10:08.515 "seek_hole": false, 00:10:08.515 "seek_data": false, 00:10:08.515 "copy": true, 00:10:08.515 "nvme_iov_md": false 00:10:08.515 }, 00:10:08.515 "memory_domains": [ 00:10:08.515 { 00:10:08.515 "dma_device_id": "system", 00:10:08.515 "dma_device_type": 1 00:10:08.515 }, 00:10:08.515 { 00:10:08.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.515 "dma_device_type": 2 00:10:08.515 } 00:10:08.515 ], 00:10:08.515 "driver_specific": {} 00:10:08.515 } 00:10:08.515 ] 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.515 "name": "Existed_Raid", 00:10:08.515 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:08.515 "strip_size_kb": 0, 00:10:08.515 "state": "configuring", 00:10:08.515 "raid_level": "raid1", 00:10:08.515 "superblock": true, 00:10:08.515 "num_base_bdevs": 4, 00:10:08.515 "num_base_bdevs_discovered": 3, 00:10:08.515 "num_base_bdevs_operational": 4, 00:10:08.515 "base_bdevs_list": [ 00:10:08.515 { 00:10:08.515 "name": "BaseBdev1", 00:10:08.515 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:08.515 "is_configured": true, 00:10:08.515 "data_offset": 2048, 00:10:08.515 "data_size": 63488 
00:10:08.515 }, 00:10:08.515 { 00:10:08.515 "name": null, 00:10:08.515 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:08.515 "is_configured": false, 00:10:08.515 "data_offset": 0, 00:10:08.515 "data_size": 63488 00:10:08.515 }, 00:10:08.515 { 00:10:08.515 "name": "BaseBdev3", 00:10:08.515 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:08.515 "is_configured": true, 00:10:08.515 "data_offset": 2048, 00:10:08.515 "data_size": 63488 00:10:08.515 }, 00:10:08.515 { 00:10:08.515 "name": "BaseBdev4", 00:10:08.515 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:08.515 "is_configured": true, 00:10:08.515 "data_offset": 2048, 00:10:08.515 "data_size": 63488 00:10:08.515 } 00:10:08.515 ] 00:10:08.515 }' 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.515 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.084 
[2024-12-06 11:53:06.590511] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.084 11:53:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.084 "name": "Existed_Raid", 00:10:09.084 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:09.084 "strip_size_kb": 0, 00:10:09.084 "state": "configuring", 00:10:09.084 "raid_level": "raid1", 00:10:09.084 "superblock": true, 00:10:09.084 "num_base_bdevs": 4, 00:10:09.084 "num_base_bdevs_discovered": 2, 00:10:09.084 "num_base_bdevs_operational": 4, 00:10:09.084 "base_bdevs_list": [ 00:10:09.084 { 00:10:09.084 "name": "BaseBdev1", 00:10:09.084 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:09.084 "is_configured": true, 00:10:09.084 "data_offset": 2048, 00:10:09.084 "data_size": 63488 00:10:09.084 }, 00:10:09.084 { 00:10:09.084 "name": null, 00:10:09.084 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:09.084 "is_configured": false, 00:10:09.084 "data_offset": 0, 00:10:09.084 "data_size": 63488 00:10:09.084 }, 00:10:09.084 { 00:10:09.084 "name": null, 00:10:09.084 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:09.084 "is_configured": false, 00:10:09.084 "data_offset": 0, 00:10:09.084 "data_size": 63488 00:10:09.084 }, 00:10:09.084 { 00:10:09.084 "name": "BaseBdev4", 00:10:09.084 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:09.084 "is_configured": true, 00:10:09.084 "data_offset": 2048, 00:10:09.084 "data_size": 63488 00:10:09.084 } 00:10:09.084 ] 00:10:09.084 }' 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.084 11:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.344 
11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.344 [2024-12-06 11:53:07.029784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.344 "name": "Existed_Raid", 00:10:09.344 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:09.344 "strip_size_kb": 0, 00:10:09.344 "state": "configuring", 00:10:09.344 "raid_level": "raid1", 00:10:09.344 "superblock": true, 00:10:09.344 "num_base_bdevs": 4, 00:10:09.344 "num_base_bdevs_discovered": 3, 00:10:09.344 "num_base_bdevs_operational": 4, 00:10:09.344 "base_bdevs_list": [ 00:10:09.344 { 00:10:09.344 "name": "BaseBdev1", 00:10:09.344 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:09.344 "is_configured": true, 00:10:09.344 "data_offset": 2048, 00:10:09.344 "data_size": 63488 00:10:09.344 }, 00:10:09.344 { 00:10:09.344 "name": null, 00:10:09.344 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:09.344 "is_configured": false, 00:10:09.344 "data_offset": 0, 00:10:09.344 "data_size": 63488 00:10:09.344 }, 00:10:09.344 { 00:10:09.344 "name": "BaseBdev3", 00:10:09.344 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:09.344 "is_configured": true, 00:10:09.344 "data_offset": 2048, 00:10:09.344 "data_size": 63488 00:10:09.344 }, 00:10:09.344 { 00:10:09.344 "name": "BaseBdev4", 00:10:09.344 "uuid": 
"fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:09.344 "is_configured": true, 00:10:09.344 "data_offset": 2048, 00:10:09.344 "data_size": 63488 00:10:09.344 } 00:10:09.344 ] 00:10:09.344 }' 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.344 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.913 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.913 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 [2024-12-06 11:53:07.488984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.914 "name": "Existed_Raid", 00:10:09.914 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:09.914 "strip_size_kb": 0, 00:10:09.914 "state": "configuring", 00:10:09.914 "raid_level": "raid1", 00:10:09.914 "superblock": true, 00:10:09.914 "num_base_bdevs": 4, 00:10:09.914 "num_base_bdevs_discovered": 2, 00:10:09.914 "num_base_bdevs_operational": 4, 00:10:09.914 "base_bdevs_list": [ 00:10:09.914 { 00:10:09.914 "name": null, 00:10:09.914 
"uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:09.914 "is_configured": false, 00:10:09.914 "data_offset": 0, 00:10:09.914 "data_size": 63488 00:10:09.914 }, 00:10:09.914 { 00:10:09.914 "name": null, 00:10:09.914 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:09.914 "is_configured": false, 00:10:09.914 "data_offset": 0, 00:10:09.914 "data_size": 63488 00:10:09.914 }, 00:10:09.914 { 00:10:09.914 "name": "BaseBdev3", 00:10:09.914 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:09.914 "is_configured": true, 00:10:09.914 "data_offset": 2048, 00:10:09.914 "data_size": 63488 00:10:09.914 }, 00:10:09.914 { 00:10:09.914 "name": "BaseBdev4", 00:10:09.914 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:09.914 "is_configured": true, 00:10:09.914 "data_offset": 2048, 00:10:09.914 "data_size": 63488 00:10:09.914 } 00:10:09.914 ] 00:10:09.914 }' 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.914 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.483 11:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.483 [2024-12-06 11:53:08.006443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.483 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.483 "name": "Existed_Raid", 00:10:10.483 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:10.483 "strip_size_kb": 0, 00:10:10.483 "state": "configuring", 00:10:10.483 "raid_level": "raid1", 00:10:10.483 "superblock": true, 00:10:10.483 "num_base_bdevs": 4, 00:10:10.483 "num_base_bdevs_discovered": 3, 00:10:10.483 "num_base_bdevs_operational": 4, 00:10:10.483 "base_bdevs_list": [ 00:10:10.483 { 00:10:10.483 "name": null, 00:10:10.483 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:10.483 "is_configured": false, 00:10:10.483 "data_offset": 0, 00:10:10.484 "data_size": 63488 00:10:10.484 }, 00:10:10.484 { 00:10:10.484 "name": "BaseBdev2", 00:10:10.484 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:10.484 "is_configured": true, 00:10:10.484 "data_offset": 2048, 00:10:10.484 "data_size": 63488 00:10:10.484 }, 00:10:10.484 { 00:10:10.484 "name": "BaseBdev3", 00:10:10.484 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:10.484 "is_configured": true, 00:10:10.484 "data_offset": 2048, 00:10:10.484 "data_size": 63488 00:10:10.484 }, 00:10:10.484 { 00:10:10.484 "name": "BaseBdev4", 00:10:10.484 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:10.484 "is_configured": true, 00:10:10.484 "data_offset": 2048, 00:10:10.484 "data_size": 63488 00:10:10.484 } 00:10:10.484 ] 00:10:10.484 }' 00:10:10.484 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.484 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.743 11:53:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.743 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe09ee38-177e-4fe6-a3cb-81ffc41d3d25 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 [2024-12-06 11:53:08.524354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:11.002 [2024-12-06 11:53:08.524594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:11.002 [2024-12-06 11:53:08.524616] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:11.002 [2024-12-06 11:53:08.524877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:11.002 [2024-12-06 11:53:08.524999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:11.002 [2024-12-06 11:53:08.525008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:11.002 NewBaseBdev 00:10:11.002 [2024-12-06 11:53:08.525103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.002 11:53:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 [ 00:10:11.002 { 00:10:11.002 "name": "NewBaseBdev", 00:10:11.002 "aliases": [ 00:10:11.002 "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25" 00:10:11.002 ], 00:10:11.002 "product_name": "Malloc disk", 00:10:11.002 "block_size": 512, 00:10:11.002 "num_blocks": 65536, 00:10:11.002 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:11.002 "assigned_rate_limits": { 00:10:11.002 "rw_ios_per_sec": 0, 00:10:11.002 "rw_mbytes_per_sec": 0, 00:10:11.002 "r_mbytes_per_sec": 0, 00:10:11.002 "w_mbytes_per_sec": 0 00:10:11.002 }, 00:10:11.002 "claimed": true, 00:10:11.002 "claim_type": "exclusive_write", 00:10:11.002 "zoned": false, 00:10:11.002 "supported_io_types": { 00:10:11.002 "read": true, 00:10:11.002 "write": true, 00:10:11.002 "unmap": true, 00:10:11.002 "flush": true, 00:10:11.002 "reset": true, 00:10:11.002 "nvme_admin": false, 00:10:11.002 "nvme_io": false, 00:10:11.002 "nvme_io_md": false, 00:10:11.002 "write_zeroes": true, 00:10:11.002 "zcopy": true, 00:10:11.002 "get_zone_info": false, 00:10:11.002 "zone_management": false, 00:10:11.002 "zone_append": false, 00:10:11.002 "compare": false, 00:10:11.002 "compare_and_write": false, 00:10:11.002 "abort": true, 00:10:11.002 "seek_hole": false, 00:10:11.002 "seek_data": false, 00:10:11.002 "copy": true, 00:10:11.002 "nvme_iov_md": false 00:10:11.002 }, 00:10:11.002 "memory_domains": [ 00:10:11.002 { 00:10:11.002 "dma_device_id": "system", 00:10:11.002 "dma_device_type": 1 00:10:11.002 }, 00:10:11.002 { 00:10:11.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.002 "dma_device_type": 2 00:10:11.002 } 00:10:11.002 ], 00:10:11.002 "driver_specific": {} 00:10:11.002 } 00:10:11.002 ] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:11.002 11:53:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.002 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.002 "name": "Existed_Raid", 00:10:11.002 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:11.002 "strip_size_kb": 0, 00:10:11.002 
"state": "online", 00:10:11.002 "raid_level": "raid1", 00:10:11.002 "superblock": true, 00:10:11.002 "num_base_bdevs": 4, 00:10:11.002 "num_base_bdevs_discovered": 4, 00:10:11.002 "num_base_bdevs_operational": 4, 00:10:11.002 "base_bdevs_list": [ 00:10:11.002 { 00:10:11.002 "name": "NewBaseBdev", 00:10:11.002 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev2", 00:10:11.003 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev3", 00:10:11.003 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev4", 00:10:11.003 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 } 00:10:11.003 ] 00:10:11.003 }' 00:10:11.003 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.003 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.261 
11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.261 [2024-12-06 11:53:08.987867] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.261 11:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.539 "name": "Existed_Raid", 00:10:11.539 "aliases": [ 00:10:11.539 "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf" 00:10:11.539 ], 00:10:11.539 "product_name": "Raid Volume", 00:10:11.539 "block_size": 512, 00:10:11.539 "num_blocks": 63488, 00:10:11.539 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:11.539 "assigned_rate_limits": { 00:10:11.539 "rw_ios_per_sec": 0, 00:10:11.539 "rw_mbytes_per_sec": 0, 00:10:11.539 "r_mbytes_per_sec": 0, 00:10:11.539 "w_mbytes_per_sec": 0 00:10:11.539 }, 00:10:11.539 "claimed": false, 00:10:11.539 "zoned": false, 00:10:11.539 "supported_io_types": { 00:10:11.539 "read": true, 00:10:11.539 "write": true, 00:10:11.539 "unmap": false, 00:10:11.539 "flush": false, 00:10:11.539 "reset": true, 00:10:11.539 "nvme_admin": false, 00:10:11.539 "nvme_io": false, 00:10:11.539 "nvme_io_md": false, 00:10:11.539 "write_zeroes": true, 00:10:11.539 "zcopy": false, 00:10:11.539 "get_zone_info": false, 00:10:11.539 "zone_management": false, 00:10:11.539 "zone_append": false, 00:10:11.539 "compare": false, 00:10:11.539 "compare_and_write": false, 00:10:11.539 
"abort": false, 00:10:11.539 "seek_hole": false, 00:10:11.539 "seek_data": false, 00:10:11.539 "copy": false, 00:10:11.539 "nvme_iov_md": false 00:10:11.539 }, 00:10:11.539 "memory_domains": [ 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 } 00:10:11.539 ], 00:10:11.539 "driver_specific": { 00:10:11.539 "raid": { 00:10:11.539 "uuid": "ec71c6a3-1fb0-4009-913e-ad536a5a9bcf", 00:10:11.539 "strip_size_kb": 0, 00:10:11.539 "state": "online", 00:10:11.539 "raid_level": "raid1", 00:10:11.539 "superblock": true, 00:10:11.539 "num_base_bdevs": 4, 00:10:11.539 "num_base_bdevs_discovered": 4, 00:10:11.539 "num_base_bdevs_operational": 4, 00:10:11.539 "base_bdevs_list": [ 00:10:11.539 { 00:10:11.539 "name": "NewBaseBdev", 00:10:11.539 "uuid": "fe09ee38-177e-4fe6-a3cb-81ffc41d3d25", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "name": "BaseBdev2", 00:10:11.539 "uuid": "4e39b709-35fb-4fa4-94ef-ea8425d79b4d", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 }, 00:10:11.539 { 
00:10:11.539 "name": "BaseBdev3", 00:10:11.539 "uuid": "cdd76e26-c720-4e18-902e-136cbc82acb8", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "name": "BaseBdev4", 00:10:11.539 "uuid": "fe2882e9-61a6-47e5-bcd9-ac46cb51f4f7", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 } 00:10:11.539 ] 00:10:11.539 } 00:10:11.539 } 00:10:11.539 }' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.539 BaseBdev2 00:10:11.539 BaseBdev3 00:10:11.539 BaseBdev4' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.798 [2024-12-06 11:53:09.307171] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.798 [2024-12-06 11:53:09.307240] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.798 [2024-12-06 11:53:09.307362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.798 [2024-12-06 11:53:09.307625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.798 [2024-12-06 11:53:09.307681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84284 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84284 ']' 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84284 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84284 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84284' 00:10:11.798 killing process with pid 84284 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84284 00:10:11.798 [2024-12-06 11:53:09.349972] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.798 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84284 00:10:11.798 [2024-12-06 11:53:09.390655] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.057 11:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.057 00:10:12.057 real 0m9.218s 00:10:12.057 user 0m15.743s 00:10:12.057 sys 0m1.933s 00:10:12.057 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:12.057 ************************************ 00:10:12.057 END TEST raid_state_function_test_sb 00:10:12.057 ************************************ 00:10:12.057 11:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.057 11:53:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:12.057 11:53:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:12.057 11:53:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.057 11:53:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.057 ************************************ 00:10:12.057 START TEST raid_superblock_test 00:10:12.057 ************************************ 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:12.057 11:53:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:12.057 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84927 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84927 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84927 ']' 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.058 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.058 [2024-12-06 11:53:09.791286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:12.058 [2024-12-06 11:53:09.791499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84927 ] 00:10:12.315 [2024-12-06 11:53:09.934199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.315 [2024-12-06 11:53:09.977722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.315 [2024-12-06 11:53:10.019004] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.315 [2024-12-06 11:53:10.019122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:12.883 
11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 malloc1 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.883 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 [2024-12-06 11:53:10.632534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.883 [2024-12-06 11:53:10.632650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.883 [2024-12-06 11:53:10.632675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:12.883 [2024-12-06 11:53:10.632689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.883 [2024-12-06 11:53:10.634729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.883 [2024-12-06 11:53:10.634769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.143 pt1 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.143 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 malloc2 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 [2024-12-06 11:53:10.675288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.144 [2024-12-06 11:53:10.675525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.144 [2024-12-06 11:53:10.675651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:13.144 [2024-12-06 11:53:10.675746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.144 [2024-12-06 11:53:10.679856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.144 [2024-12-06 11:53:10.679979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.144 
pt2 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 malloc3 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 [2024-12-06 11:53:10.708941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.144 [2024-12-06 11:53:10.709044] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.144 [2024-12-06 11:53:10.709079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:13.144 [2024-12-06 11:53:10.709107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.144 [2024-12-06 11:53:10.711027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.144 [2024-12-06 11:53:10.711110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.144 pt3 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 malloc4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 [2024-12-06 11:53:10.741285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:13.144 [2024-12-06 11:53:10.741330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.144 [2024-12-06 11:53:10.741344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:13.144 [2024-12-06 11:53:10.741356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.144 [2024-12-06 11:53:10.743380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.144 [2024-12-06 11:53:10.743459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:13.144 pt4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 [2024-12-06 11:53:10.753302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.144 [2024-12-06 11:53:10.755019] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.144 [2024-12-06 11:53:10.755087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.144 [2024-12-06 11:53:10.755128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:13.144 [2024-12-06 11:53:10.755319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:13.144 [2024-12-06 11:53:10.755339] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.144 [2024-12-06 11:53:10.755581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:13.144 [2024-12-06 11:53:10.755734] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:13.144 [2024-12-06 11:53:10.755743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:13.144 [2024-12-06 11:53:10.755862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.144 
11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.144 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.144 "name": "raid_bdev1", 00:10:13.144 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:13.144 "strip_size_kb": 0, 00:10:13.144 "state": "online", 00:10:13.144 "raid_level": "raid1", 00:10:13.144 "superblock": true, 00:10:13.144 "num_base_bdevs": 4, 00:10:13.144 "num_base_bdevs_discovered": 4, 00:10:13.144 "num_base_bdevs_operational": 4, 00:10:13.144 "base_bdevs_list": [ 00:10:13.144 { 00:10:13.144 "name": "pt1", 00:10:13.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.145 "is_configured": true, 00:10:13.145 "data_offset": 2048, 00:10:13.145 "data_size": 63488 00:10:13.145 }, 00:10:13.145 { 00:10:13.145 "name": "pt2", 00:10:13.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.145 "is_configured": true, 00:10:13.145 "data_offset": 2048, 00:10:13.145 "data_size": 63488 00:10:13.145 }, 00:10:13.145 { 00:10:13.145 "name": "pt3", 00:10:13.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.145 "is_configured": true, 00:10:13.145 "data_offset": 2048, 00:10:13.145 "data_size": 63488 
00:10:13.145 }, 00:10:13.145 { 00:10:13.145 "name": "pt4", 00:10:13.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.145 "is_configured": true, 00:10:13.145 "data_offset": 2048, 00:10:13.145 "data_size": 63488 00:10:13.145 } 00:10:13.145 ] 00:10:13.145 }' 00:10:13.145 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.145 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.713 [2024-12-06 11:53:11.200796] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.713 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.713 "name": "raid_bdev1", 00:10:13.713 "aliases": [ 00:10:13.713 "596b7646-2732-4205-a347-92b55aa16af3" 00:10:13.713 ], 
00:10:13.713 "product_name": "Raid Volume", 00:10:13.713 "block_size": 512, 00:10:13.713 "num_blocks": 63488, 00:10:13.713 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:13.713 "assigned_rate_limits": { 00:10:13.713 "rw_ios_per_sec": 0, 00:10:13.713 "rw_mbytes_per_sec": 0, 00:10:13.713 "r_mbytes_per_sec": 0, 00:10:13.713 "w_mbytes_per_sec": 0 00:10:13.713 }, 00:10:13.713 "claimed": false, 00:10:13.713 "zoned": false, 00:10:13.713 "supported_io_types": { 00:10:13.713 "read": true, 00:10:13.713 "write": true, 00:10:13.713 "unmap": false, 00:10:13.713 "flush": false, 00:10:13.713 "reset": true, 00:10:13.713 "nvme_admin": false, 00:10:13.713 "nvme_io": false, 00:10:13.713 "nvme_io_md": false, 00:10:13.713 "write_zeroes": true, 00:10:13.713 "zcopy": false, 00:10:13.713 "get_zone_info": false, 00:10:13.713 "zone_management": false, 00:10:13.713 "zone_append": false, 00:10:13.713 "compare": false, 00:10:13.713 "compare_and_write": false, 00:10:13.713 "abort": false, 00:10:13.713 "seek_hole": false, 00:10:13.713 "seek_data": false, 00:10:13.713 "copy": false, 00:10:13.714 "nvme_iov_md": false 00:10:13.714 }, 00:10:13.714 "memory_domains": [ 00:10:13.714 { 00:10:13.714 "dma_device_id": "system", 00:10:13.714 "dma_device_type": 1 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.714 "dma_device_type": 2 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "system", 00:10:13.714 "dma_device_type": 1 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.714 "dma_device_type": 2 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "system", 00:10:13.714 "dma_device_type": 1 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.714 "dma_device_type": 2 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": "system", 00:10:13.714 "dma_device_type": 1 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:13.714 "dma_device_type": 2 00:10:13.714 } 00:10:13.714 ], 00:10:13.714 "driver_specific": { 00:10:13.714 "raid": { 00:10:13.714 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:13.714 "strip_size_kb": 0, 00:10:13.714 "state": "online", 00:10:13.714 "raid_level": "raid1", 00:10:13.714 "superblock": true, 00:10:13.714 "num_base_bdevs": 4, 00:10:13.714 "num_base_bdevs_discovered": 4, 00:10:13.714 "num_base_bdevs_operational": 4, 00:10:13.714 "base_bdevs_list": [ 00:10:13.714 { 00:10:13.714 "name": "pt1", 00:10:13.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.714 "is_configured": true, 00:10:13.714 "data_offset": 2048, 00:10:13.714 "data_size": 63488 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "name": "pt2", 00:10:13.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.714 "is_configured": true, 00:10:13.714 "data_offset": 2048, 00:10:13.714 "data_size": 63488 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "name": "pt3", 00:10:13.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.714 "is_configured": true, 00:10:13.714 "data_offset": 2048, 00:10:13.714 "data_size": 63488 00:10:13.714 }, 00:10:13.714 { 00:10:13.714 "name": "pt4", 00:10:13.714 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.714 "is_configured": true, 00:10:13.714 "data_offset": 2048, 00:10:13.714 "data_size": 63488 00:10:13.714 } 00:10:13.714 ] 00:10:13.714 } 00:10:13.714 } 00:10:13.714 }' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.714 pt2 00:10:13.714 pt3 00:10:13.714 pt4' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.714 11:53:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.714 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 [2024-12-06 11:53:11.528178] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=596b7646-2732-4205-a347-92b55aa16af3 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 596b7646-2732-4205-a347-92b55aa16af3 ']' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 [2024-12-06 11:53:11.571849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.974 [2024-12-06 11:53:11.571916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.974 [2024-12-06 11:53:11.571999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.974 [2024-12-06 11:53:11.572109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.974 [2024-12-06 11:53:11.572157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:13.974 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.975 11:53:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:13.975 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.975 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.975 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.975 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.235 [2024-12-06 11:53:11.731585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:14.235 [2024-12-06 11:53:11.733368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:14.235 [2024-12-06 11:53:11.733411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:14.235 [2024-12-06 11:53:11.733445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:14.235 [2024-12-06 11:53:11.733490] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:14.235 [2024-12-06 11:53:11.733529] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:14.235 [2024-12-06 11:53:11.733547] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:14.235 [2024-12-06 11:53:11.733562] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:14.235 [2024-12-06 11:53:11.733575] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.235 [2024-12-06 11:53:11.733584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:14.235 request: 00:10:14.235 { 00:10:14.235 "name": "raid_bdev1", 00:10:14.235 "raid_level": "raid1", 00:10:14.235 "base_bdevs": [ 00:10:14.235 "malloc1", 00:10:14.235 "malloc2", 00:10:14.235 "malloc3", 00:10:14.235 "malloc4" 00:10:14.235 ], 00:10:14.235 "superblock": false, 00:10:14.235 "method": "bdev_raid_create", 00:10:14.235 "req_id": 1 00:10:14.235 } 00:10:14.235 Got JSON-RPC error response 00:10:14.235 response: 00:10:14.235 { 00:10:14.235 "code": -17, 00:10:14.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:14.235 } 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.235 
11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.235 [2024-12-06 11:53:11.795436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.235 [2024-12-06 11:53:11.795520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.235 [2024-12-06 11:53:11.795554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:14.235 [2024-12-06 11:53:11.795579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.235 [2024-12-06 11:53:11.797645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.235 [2024-12-06 11:53:11.797710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.235 [2024-12-06 11:53:11.797789] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.235 [2024-12-06 11:53:11.797841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.235 pt1 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.235 11:53:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.235 "name": "raid_bdev1", 00:10:14.235 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:14.235 "strip_size_kb": 0, 00:10:14.235 "state": "configuring", 00:10:14.235 "raid_level": "raid1", 00:10:14.235 "superblock": true, 00:10:14.235 "num_base_bdevs": 4, 00:10:14.235 "num_base_bdevs_discovered": 1, 00:10:14.235 "num_base_bdevs_operational": 4, 00:10:14.235 "base_bdevs_list": [ 00:10:14.235 { 00:10:14.235 "name": "pt1", 00:10:14.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.235 "is_configured": true, 00:10:14.235 "data_offset": 2048, 00:10:14.235 "data_size": 63488 00:10:14.235 }, 00:10:14.235 { 00:10:14.235 "name": null, 00:10:14.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.235 "is_configured": false, 00:10:14.235 "data_offset": 2048, 00:10:14.235 "data_size": 63488 00:10:14.235 }, 00:10:14.235 { 00:10:14.235 "name": null, 00:10:14.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.235 
"is_configured": false, 00:10:14.235 "data_offset": 2048, 00:10:14.235 "data_size": 63488 00:10:14.235 }, 00:10:14.235 { 00:10:14.235 "name": null, 00:10:14.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.235 "is_configured": false, 00:10:14.235 "data_offset": 2048, 00:10:14.235 "data_size": 63488 00:10:14.235 } 00:10:14.235 ] 00:10:14.235 }' 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.235 11:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.804 [2024-12-06 11:53:12.270747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.804 [2024-12-06 11:53:12.270799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.804 [2024-12-06 11:53:12.270818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:14.804 [2024-12-06 11:53:12.270827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.804 [2024-12-06 11:53:12.271162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.804 [2024-12-06 11:53:12.271178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.804 [2024-12-06 11:53:12.271247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.804 [2024-12-06 11:53:12.271276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:14.804 pt2 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.804 [2024-12-06 11:53:12.282745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.804 11:53:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.804 "name": "raid_bdev1", 00:10:14.804 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:14.804 "strip_size_kb": 0, 00:10:14.804 "state": "configuring", 00:10:14.804 "raid_level": "raid1", 00:10:14.804 "superblock": true, 00:10:14.804 "num_base_bdevs": 4, 00:10:14.804 "num_base_bdevs_discovered": 1, 00:10:14.804 "num_base_bdevs_operational": 4, 00:10:14.804 "base_bdevs_list": [ 00:10:14.804 { 00:10:14.804 "name": "pt1", 00:10:14.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.804 "is_configured": true, 00:10:14.804 "data_offset": 2048, 00:10:14.804 "data_size": 63488 00:10:14.804 }, 00:10:14.804 { 00:10:14.804 "name": null, 00:10:14.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.804 "is_configured": false, 00:10:14.804 "data_offset": 0, 00:10:14.804 "data_size": 63488 00:10:14.804 }, 00:10:14.804 { 00:10:14.804 "name": null, 00:10:14.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.804 "is_configured": false, 00:10:14.804 "data_offset": 2048, 00:10:14.804 "data_size": 63488 00:10:14.804 }, 00:10:14.804 { 00:10:14.804 "name": null, 00:10:14.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.804 "is_configured": false, 00:10:14.804 "data_offset": 2048, 00:10:14.804 "data_size": 63488 00:10:14.804 } 00:10:14.804 ] 00:10:14.804 }' 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.804 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.066 [2024-12-06 11:53:12.745921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.066 [2024-12-06 11:53:12.746038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.066 [2024-12-06 11:53:12.746058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:15.066 [2024-12-06 11:53:12.746067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.066 [2024-12-06 11:53:12.746424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.066 [2024-12-06 11:53:12.746443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.066 [2024-12-06 11:53:12.746499] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:15.066 [2024-12-06 11:53:12.746519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.066 pt2 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.066 11:53:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.066 [2024-12-06 11:53:12.757882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.066 [2024-12-06 11:53:12.757927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.066 [2024-12-06 11:53:12.757948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:15.066 [2024-12-06 11:53:12.757958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.066 [2024-12-06 11:53:12.758266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.066 [2024-12-06 11:53:12.758284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.066 [2024-12-06 11:53:12.758334] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.066 [2024-12-06 11:53:12.758354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.066 pt3 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.066 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.066 [2024-12-06 11:53:12.769881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:15.066 [2024-12-06 
11:53:12.769930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.066 [2024-12-06 11:53:12.769945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:15.066 [2024-12-06 11:53:12.769954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.066 [2024-12-06 11:53:12.770227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.066 [2024-12-06 11:53:12.770259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:15.066 [2024-12-06 11:53:12.770308] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:15.066 [2024-12-06 11:53:12.770326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:15.067 [2024-12-06 11:53:12.770420] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:15.067 [2024-12-06 11:53:12.770431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.067 [2024-12-06 11:53:12.770660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:15.067 [2024-12-06 11:53:12.770795] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:15.067 [2024-12-06 11:53:12.770805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:15.067 [2024-12-06 11:53:12.770901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.067 pt4 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.067 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.327 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.327 "name": "raid_bdev1", 00:10:15.327 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:15.327 "strip_size_kb": 0, 00:10:15.327 "state": "online", 00:10:15.327 "raid_level": "raid1", 00:10:15.327 "superblock": true, 00:10:15.327 "num_base_bdevs": 4, 00:10:15.327 
"num_base_bdevs_discovered": 4, 00:10:15.327 "num_base_bdevs_operational": 4, 00:10:15.327 "base_bdevs_list": [ 00:10:15.327 { 00:10:15.327 "name": "pt1", 00:10:15.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.327 "is_configured": true, 00:10:15.327 "data_offset": 2048, 00:10:15.327 "data_size": 63488 00:10:15.327 }, 00:10:15.327 { 00:10:15.327 "name": "pt2", 00:10:15.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.327 "is_configured": true, 00:10:15.327 "data_offset": 2048, 00:10:15.327 "data_size": 63488 00:10:15.327 }, 00:10:15.327 { 00:10:15.327 "name": "pt3", 00:10:15.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.327 "is_configured": true, 00:10:15.327 "data_offset": 2048, 00:10:15.327 "data_size": 63488 00:10:15.327 }, 00:10:15.327 { 00:10:15.327 "name": "pt4", 00:10:15.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.327 "is_configured": true, 00:10:15.327 "data_offset": 2048, 00:10:15.327 "data_size": 63488 00:10:15.327 } 00:10:15.327 ] 00:10:15.327 }' 00:10:15.327 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.327 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.586 [2024-12-06 11:53:13.201448] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.586 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.586 "name": "raid_bdev1", 00:10:15.586 "aliases": [ 00:10:15.586 "596b7646-2732-4205-a347-92b55aa16af3" 00:10:15.586 ], 00:10:15.586 "product_name": "Raid Volume", 00:10:15.586 "block_size": 512, 00:10:15.586 "num_blocks": 63488, 00:10:15.586 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:15.586 "assigned_rate_limits": { 00:10:15.586 "rw_ios_per_sec": 0, 00:10:15.586 "rw_mbytes_per_sec": 0, 00:10:15.586 "r_mbytes_per_sec": 0, 00:10:15.586 "w_mbytes_per_sec": 0 00:10:15.586 }, 00:10:15.586 "claimed": false, 00:10:15.586 "zoned": false, 00:10:15.586 "supported_io_types": { 00:10:15.586 "read": true, 00:10:15.586 "write": true, 00:10:15.586 "unmap": false, 00:10:15.586 "flush": false, 00:10:15.586 "reset": true, 00:10:15.586 "nvme_admin": false, 00:10:15.586 "nvme_io": false, 00:10:15.586 "nvme_io_md": false, 00:10:15.586 "write_zeroes": true, 00:10:15.586 "zcopy": false, 00:10:15.586 "get_zone_info": false, 00:10:15.586 "zone_management": false, 00:10:15.586 "zone_append": false, 00:10:15.586 "compare": false, 00:10:15.586 "compare_and_write": false, 00:10:15.586 "abort": false, 00:10:15.586 "seek_hole": false, 00:10:15.586 "seek_data": false, 00:10:15.586 "copy": false, 00:10:15.586 "nvme_iov_md": false 00:10:15.586 }, 00:10:15.586 "memory_domains": [ 00:10:15.586 { 00:10:15.586 "dma_device_id": "system", 00:10:15.586 
"dma_device_type": 1 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.586 "dma_device_type": 2 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "system", 00:10:15.586 "dma_device_type": 1 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.586 "dma_device_type": 2 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "system", 00:10:15.586 "dma_device_type": 1 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.586 "dma_device_type": 2 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "system", 00:10:15.586 "dma_device_type": 1 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.586 "dma_device_type": 2 00:10:15.586 } 00:10:15.586 ], 00:10:15.586 "driver_specific": { 00:10:15.586 "raid": { 00:10:15.586 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:15.586 "strip_size_kb": 0, 00:10:15.586 "state": "online", 00:10:15.586 "raid_level": "raid1", 00:10:15.586 "superblock": true, 00:10:15.586 "num_base_bdevs": 4, 00:10:15.586 "num_base_bdevs_discovered": 4, 00:10:15.586 "num_base_bdevs_operational": 4, 00:10:15.586 "base_bdevs_list": [ 00:10:15.586 { 00:10:15.586 "name": "pt1", 00:10:15.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.586 "is_configured": true, 00:10:15.586 "data_offset": 2048, 00:10:15.586 "data_size": 63488 00:10:15.586 }, 00:10:15.586 { 00:10:15.586 "name": "pt2", 00:10:15.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.587 "is_configured": true, 00:10:15.587 "data_offset": 2048, 00:10:15.587 "data_size": 63488 00:10:15.587 }, 00:10:15.587 { 00:10:15.587 "name": "pt3", 00:10:15.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.587 "is_configured": true, 00:10:15.587 "data_offset": 2048, 00:10:15.587 "data_size": 63488 00:10:15.587 }, 00:10:15.587 { 00:10:15.587 "name": "pt4", 00:10:15.587 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:15.587 "is_configured": true, 00:10:15.587 "data_offset": 2048, 00:10:15.587 "data_size": 63488 00:10:15.587 } 00:10:15.587 ] 00:10:15.587 } 00:10:15.587 } 00:10:15.587 }' 00:10:15.587 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.587 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.587 pt2 00:10:15.587 pt3 00:10:15.587 pt4' 00:10:15.587 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.846 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 [2024-12-06 11:53:13.544823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 596b7646-2732-4205-a347-92b55aa16af3 '!=' 596b7646-2732-4205-a347-92b55aa16af3 ']' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 [2024-12-06 11:53:13.580531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:15.847 11:53:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.847 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.106 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.106 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.106 "name": "raid_bdev1", 00:10:16.106 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:16.106 "strip_size_kb": 0, 00:10:16.106 "state": "online", 
00:10:16.106 "raid_level": "raid1", 00:10:16.106 "superblock": true, 00:10:16.106 "num_base_bdevs": 4, 00:10:16.106 "num_base_bdevs_discovered": 3, 00:10:16.106 "num_base_bdevs_operational": 3, 00:10:16.106 "base_bdevs_list": [ 00:10:16.106 { 00:10:16.106 "name": null, 00:10:16.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.106 "is_configured": false, 00:10:16.106 "data_offset": 0, 00:10:16.106 "data_size": 63488 00:10:16.106 }, 00:10:16.106 { 00:10:16.106 "name": "pt2", 00:10:16.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.106 "is_configured": true, 00:10:16.106 "data_offset": 2048, 00:10:16.106 "data_size": 63488 00:10:16.106 }, 00:10:16.106 { 00:10:16.106 "name": "pt3", 00:10:16.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.106 "is_configured": true, 00:10:16.106 "data_offset": 2048, 00:10:16.106 "data_size": 63488 00:10:16.106 }, 00:10:16.106 { 00:10:16.106 "name": "pt4", 00:10:16.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.106 "is_configured": true, 00:10:16.106 "data_offset": 2048, 00:10:16.106 "data_size": 63488 00:10:16.106 } 00:10:16.106 ] 00:10:16.106 }' 00:10:16.106 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.106 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 [2024-12-06 11:53:14.043696] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.366 [2024-12-06 11:53:14.043764] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.366 [2024-12-06 11:53:14.043866] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:16.366 [2024-12-06 11:53:14.043946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.366 [2024-12-06 11:53:14.043990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.366 
11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.366 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.627 [2024-12-06 11:53:14.139516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.627 [2024-12-06 11:53:14.139564] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.627 [2024-12-06 11:53:14.139588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:16.627 [2024-12-06 11:53:14.139598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.627 [2024-12-06 11:53:14.141633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.627 [2024-12-06 11:53:14.141671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.627 [2024-12-06 11:53:14.141732] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.627 [2024-12-06 11:53:14.141765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.627 pt2 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.627 "name": "raid_bdev1", 00:10:16.627 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:16.627 "strip_size_kb": 0, 00:10:16.627 "state": "configuring", 00:10:16.627 "raid_level": "raid1", 00:10:16.627 "superblock": true, 00:10:16.627 "num_base_bdevs": 4, 00:10:16.627 "num_base_bdevs_discovered": 1, 00:10:16.627 "num_base_bdevs_operational": 3, 00:10:16.627 "base_bdevs_list": [ 00:10:16.627 { 00:10:16.627 "name": null, 00:10:16.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.627 "is_configured": false, 00:10:16.627 "data_offset": 2048, 00:10:16.627 "data_size": 63488 00:10:16.627 }, 00:10:16.627 { 00:10:16.627 "name": "pt2", 00:10:16.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.627 "is_configured": true, 00:10:16.627 "data_offset": 2048, 00:10:16.627 "data_size": 63488 00:10:16.627 }, 00:10:16.627 { 00:10:16.627 "name": null, 00:10:16.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.627 "is_configured": false, 00:10:16.627 "data_offset": 2048, 00:10:16.627 "data_size": 63488 00:10:16.627 }, 00:10:16.627 { 00:10:16.627 "name": null, 00:10:16.627 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.627 "is_configured": false, 00:10:16.627 "data_offset": 2048, 00:10:16.627 "data_size": 63488 00:10:16.627 } 00:10:16.627 ] 00:10:16.627 }' 
00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.627 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 [2024-12-06 11:53:14.558830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.887 [2024-12-06 11:53:14.558954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.887 [2024-12-06 11:53:14.558991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:16.887 [2024-12-06 11:53:14.559023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.887 [2024-12-06 11:53:14.559449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.887 [2024-12-06 11:53:14.559514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.887 [2024-12-06 11:53:14.559611] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:16.887 [2024-12-06 11:53:14.559670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.887 pt3 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.887 "name": "raid_bdev1", 00:10:16.887 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:16.887 "strip_size_kb": 0, 00:10:16.887 "state": "configuring", 00:10:16.887 "raid_level": "raid1", 00:10:16.887 "superblock": true, 00:10:16.887 "num_base_bdevs": 4, 00:10:16.887 "num_base_bdevs_discovered": 2, 00:10:16.887 "num_base_bdevs_operational": 3, 00:10:16.887 
"base_bdevs_list": [ 00:10:16.887 { 00:10:16.887 "name": null, 00:10:16.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.887 "is_configured": false, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "pt2", 00:10:16.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "pt3", 00:10:16.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": null, 00:10:16.887 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.887 "is_configured": false, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 } 00:10:16.887 ] 00:10:16.887 }' 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.887 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 [2024-12-06 11:53:14.986076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.456 [2024-12-06 11:53:14.986135] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.456 [2024-12-06 11:53:14.986154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:17.456 [2024-12-06 11:53:14.986164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.456 [2024-12-06 11:53:14.986553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.456 [2024-12-06 11:53:14.986572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.456 [2024-12-06 11:53:14.986639] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:17.456 [2024-12-06 11:53:14.986661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.456 [2024-12-06 11:53:14.986754] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:17.456 [2024-12-06 11:53:14.986764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.456 [2024-12-06 11:53:14.986983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:17.456 [2024-12-06 11:53:14.987121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:17.456 [2024-12-06 11:53:14.987129] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:17.456 [2024-12-06 11:53:14.987232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.456 pt4 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.456 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.457 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.457 "name": "raid_bdev1", 00:10:17.457 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:17.457 "strip_size_kb": 0, 00:10:17.457 "state": "online", 00:10:17.457 "raid_level": "raid1", 00:10:17.457 "superblock": true, 00:10:17.457 "num_base_bdevs": 4, 00:10:17.457 "num_base_bdevs_discovered": 3, 00:10:17.457 "num_base_bdevs_operational": 3, 00:10:17.457 "base_bdevs_list": [ 00:10:17.457 { 00:10:17.457 "name": null, 00:10:17.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.457 "is_configured": false, 00:10:17.457 
"data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 }, 00:10:17.457 { 00:10:17.457 "name": "pt2", 00:10:17.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.457 "is_configured": true, 00:10:17.457 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 }, 00:10:17.457 { 00:10:17.457 "name": "pt3", 00:10:17.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.457 "is_configured": true, 00:10:17.457 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 }, 00:10:17.457 { 00:10:17.457 "name": "pt4", 00:10:17.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.457 "is_configured": true, 00:10:17.457 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 } 00:10:17.457 ] 00:10:17.457 }' 00:10:17.457 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.457 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 [2024-12-06 11:53:15.389381] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.716 [2024-12-06 11:53:15.389406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.716 [2024-12-06 11:53:15.389465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.716 [2024-12-06 11:53:15.389533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.716 [2024-12-06 11:53:15.389542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:17.716 11:53:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.716 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.716 [2024-12-06 11:53:15.461241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.716 [2024-12-06 11:53:15.461296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:17.716 [2024-12-06 11:53:15.461315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:17.716 [2024-12-06 11:53:15.461323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.716 [2024-12-06 11:53:15.463410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.716 [2024-12-06 11:53:15.463443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.717 [2024-12-06 11:53:15.463502] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.717 [2024-12-06 11:53:15.463544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.717 [2024-12-06 11:53:15.463658] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:17.717 [2024-12-06 11:53:15.463670] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.717 [2024-12-06 11:53:15.463685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:17.717 [2024-12-06 11:53:15.463711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.717 [2024-12-06 11:53:15.463788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.717 pt1 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.717 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.976 "name": "raid_bdev1", 00:10:17.976 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:17.976 "strip_size_kb": 0, 00:10:17.976 "state": "configuring", 00:10:17.976 "raid_level": "raid1", 00:10:17.976 "superblock": true, 00:10:17.976 "num_base_bdevs": 4, 00:10:17.976 "num_base_bdevs_discovered": 2, 00:10:17.976 "num_base_bdevs_operational": 3, 00:10:17.976 "base_bdevs_list": [ 00:10:17.976 { 00:10:17.976 "name": null, 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.976 "is_configured": false, 00:10:17.976 "data_offset": 2048, 00:10:17.976 
"data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "pt2", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "pt3", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": null, 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.976 "is_configured": false, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 } 00:10:17.976 ] 00:10:17.976 }' 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.976 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.237 [2024-12-06 
11:53:15.972380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:18.237 [2024-12-06 11:53:15.972480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.237 [2024-12-06 11:53:15.972517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:18.237 [2024-12-06 11:53:15.972546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.237 [2024-12-06 11:53:15.972925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.237 [2024-12-06 11:53:15.972979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:18.237 [2024-12-06 11:53:15.973067] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:18.237 [2024-12-06 11:53:15.973115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:18.237 [2024-12-06 11:53:15.973255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:18.237 [2024-12-06 11:53:15.973298] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.237 [2024-12-06 11:53:15.973568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:18.237 [2024-12-06 11:53:15.973731] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:18.237 [2024-12-06 11:53:15.973771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:18.237 [2024-12-06 11:53:15.973917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.237 pt4 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.237 11:53:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.237 11:53:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.497 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.497 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.497 "name": "raid_bdev1", 00:10:18.497 "uuid": "596b7646-2732-4205-a347-92b55aa16af3", 00:10:18.497 "strip_size_kb": 0, 00:10:18.497 "state": "online", 00:10:18.497 "raid_level": "raid1", 00:10:18.497 "superblock": true, 00:10:18.497 "num_base_bdevs": 4, 00:10:18.497 "num_base_bdevs_discovered": 3, 00:10:18.497 "num_base_bdevs_operational": 3, 00:10:18.497 "base_bdevs_list": [ 00:10:18.497 { 
00:10:18.497 "name": null, 00:10:18.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.497 "is_configured": false, 00:10:18.497 "data_offset": 2048, 00:10:18.497 "data_size": 63488 00:10:18.497 }, 00:10:18.497 { 00:10:18.497 "name": "pt2", 00:10:18.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.497 "is_configured": true, 00:10:18.497 "data_offset": 2048, 00:10:18.497 "data_size": 63488 00:10:18.497 }, 00:10:18.497 { 00:10:18.497 "name": "pt3", 00:10:18.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.497 "is_configured": true, 00:10:18.497 "data_offset": 2048, 00:10:18.497 "data_size": 63488 00:10:18.497 }, 00:10:18.497 { 00:10:18.497 "name": "pt4", 00:10:18.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.497 "is_configured": true, 00:10:18.497 "data_offset": 2048, 00:10:18.497 "data_size": 63488 00:10:18.497 } 00:10:18.497 ] 00:10:18.497 }' 00:10:18.497 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.497 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.755 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:18.756 
11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 [2024-12-06 11:53:16.503748] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 596b7646-2732-4205-a347-92b55aa16af3 '!=' 596b7646-2732-4205-a347-92b55aa16af3 ']' 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84927 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84927 ']' 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84927 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84927 00:10:19.015 killing process with pid 84927 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84927' 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84927 00:10:19.015 [2024-12-06 11:53:16.585269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.015 [2024-12-06 11:53:16.585352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.015 [2024-12-06 11:53:16.585428] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.015 [2024-12-06 11:53:16.585437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:19.015 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84927 00:10:19.015 [2024-12-06 11:53:16.629658] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.274 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:19.274 00:10:19.274 real 0m7.160s 00:10:19.274 user 0m12.060s 00:10:19.274 sys 0m1.473s 00:10:19.274 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.274 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.274 ************************************ 00:10:19.274 END TEST raid_superblock_test 00:10:19.274 ************************************ 00:10:19.274 11:53:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:19.274 11:53:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:19.274 11:53:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.274 11:53:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.274 ************************************ 00:10:19.274 START TEST raid_read_error_test 00:10:19.274 ************************************ 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:19.274 11:53:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.274 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YzDP2MGN4S 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85403 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85403 00:10:19.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85403 ']' 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.275 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.534 [2024-12-06 11:53:17.043351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:19.534 [2024-12-06 11:53:17.043460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85403 ] 00:10:19.534 [2024-12-06 11:53:17.187388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.534 [2024-12-06 11:53:17.230945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.534 [2024-12-06 11:53:17.272967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.534 [2024-12-06 11:53:17.273084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 BaseBdev1_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 true 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 [2024-12-06 11:53:17.906768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.474 [2024-12-06 11:53:17.906871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.474 [2024-12-06 11:53:17.906915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:20.474 [2024-12-06 11:53:17.906945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.474 [2024-12-06 11:53:17.909044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.474 [2024-12-06 11:53:17.909131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.474 BaseBdev1 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 BaseBdev2_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 true 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 [2024-12-06 11:53:17.961007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.474 [2024-12-06 11:53:17.961099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.474 [2024-12-06 11:53:17.961124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:20.474 [2024-12-06 11:53:17.961131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.474 [2024-12-06 11:53:17.963158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.474 [2024-12-06 11:53:17.963192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.474 BaseBdev2 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 BaseBdev3_malloc 00:10:20.474 11:53:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 true 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 [2024-12-06 11:53:18.001364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.474 [2024-12-06 11:53:18.001408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.474 [2024-12-06 11:53:18.001426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:20.474 [2024-12-06 11:53:18.001434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.474 [2024-12-06 11:53:18.003422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.474 [2024-12-06 11:53:18.003501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:20.474 BaseBdev3 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 BaseBdev4_malloc 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 true 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 [2024-12-06 11:53:18.041713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:20.474 [2024-12-06 11:53:18.041757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.474 [2024-12-06 11:53:18.041778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.474 [2024-12-06 11:53:18.041786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.474 [2024-12-06 11:53:18.043789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.474 [2024-12-06 11:53:18.043824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:20.474 BaseBdev4 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.474 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.474 [2024-12-06 11:53:18.053752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.474 [2024-12-06 11:53:18.055585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.474 [2024-12-06 11:53:18.055654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.474 [2024-12-06 11:53:18.055712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.475 [2024-12-06 11:53:18.055896] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:20.475 [2024-12-06 11:53:18.055907] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.475 [2024-12-06 11:53:18.056139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:20.475 [2024-12-06 11:53:18.056284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:20.475 [2024-12-06 11:53:18.056297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:20.475 [2024-12-06 11:53:18.056418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:20.475 11:53:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.475 "name": "raid_bdev1", 00:10:20.475 "uuid": "c4cb42bd-237d-47d3-b3b6-e715146a7895", 00:10:20.475 "strip_size_kb": 0, 00:10:20.475 "state": "online", 00:10:20.475 "raid_level": "raid1", 00:10:20.475 "superblock": true, 00:10:20.475 "num_base_bdevs": 4, 00:10:20.475 "num_base_bdevs_discovered": 4, 00:10:20.475 "num_base_bdevs_operational": 4, 00:10:20.475 "base_bdevs_list": [ 00:10:20.475 { 
00:10:20.475 "name": "BaseBdev1", 00:10:20.475 "uuid": "4e688cee-a170-5cf6-9982-cb07a68373bd", 00:10:20.475 "is_configured": true, 00:10:20.475 "data_offset": 2048, 00:10:20.475 "data_size": 63488 00:10:20.475 }, 00:10:20.475 { 00:10:20.475 "name": "BaseBdev2", 00:10:20.475 "uuid": "e3e5b5a4-59bd-5254-8d7a-e6494fdfac61", 00:10:20.475 "is_configured": true, 00:10:20.475 "data_offset": 2048, 00:10:20.475 "data_size": 63488 00:10:20.475 }, 00:10:20.475 { 00:10:20.475 "name": "BaseBdev3", 00:10:20.475 "uuid": "c3c172ed-bc46-5010-b931-523131f91ea6", 00:10:20.475 "is_configured": true, 00:10:20.475 "data_offset": 2048, 00:10:20.475 "data_size": 63488 00:10:20.475 }, 00:10:20.475 { 00:10:20.475 "name": "BaseBdev4", 00:10:20.475 "uuid": "1a652618-6793-51e7-ae8d-8b71acdd08e9", 00:10:20.475 "is_configured": true, 00:10:20.475 "data_offset": 2048, 00:10:20.475 "data_size": 63488 00:10:20.475 } 00:10:20.475 ] 00:10:20.475 }' 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.475 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.045 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.045 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.045 [2024-12-06 11:53:18.613135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.983 11:53:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.983 11:53:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.983 "name": "raid_bdev1", 00:10:21.983 "uuid": "c4cb42bd-237d-47d3-b3b6-e715146a7895", 00:10:21.983 "strip_size_kb": 0, 00:10:21.983 "state": "online", 00:10:21.983 "raid_level": "raid1", 00:10:21.983 "superblock": true, 00:10:21.983 "num_base_bdevs": 4, 00:10:21.983 "num_base_bdevs_discovered": 4, 00:10:21.983 "num_base_bdevs_operational": 4, 00:10:21.983 "base_bdevs_list": [ 00:10:21.983 { 00:10:21.983 "name": "BaseBdev1", 00:10:21.983 "uuid": "4e688cee-a170-5cf6-9982-cb07a68373bd", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 }, 00:10:21.983 { 00:10:21.983 "name": "BaseBdev2", 00:10:21.983 "uuid": "e3e5b5a4-59bd-5254-8d7a-e6494fdfac61", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 }, 00:10:21.983 { 00:10:21.983 "name": "BaseBdev3", 00:10:21.983 "uuid": "c3c172ed-bc46-5010-b931-523131f91ea6", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 }, 00:10:21.983 { 00:10:21.983 "name": "BaseBdev4", 00:10:21.983 "uuid": "1a652618-6793-51e7-ae8d-8b71acdd08e9", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 } 00:10:21.983 ] 00:10:21.983 }' 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.983 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.253 [2024-12-06 11:53:19.967613] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.253 [2024-12-06 11:53:19.967643] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.253 [2024-12-06 11:53:19.970139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.253 [2024-12-06 11:53:19.970226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.253 [2024-12-06 11:53:19.970390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.253 [2024-12-06 11:53:19.970434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:22.253 { 00:10:22.253 "results": [ 00:10:22.253 { 00:10:22.253 "job": "raid_bdev1", 00:10:22.253 "core_mask": "0x1", 00:10:22.253 "workload": "randrw", 00:10:22.253 "percentage": 50, 00:10:22.253 "status": "finished", 00:10:22.253 "queue_depth": 1, 00:10:22.253 "io_size": 131072, 00:10:22.253 "runtime": 1.355214, 00:10:22.253 "iops": 12043.116437699138, 00:10:22.253 "mibps": 1505.3895547123923, 00:10:22.253 "io_failed": 0, 00:10:22.253 "io_timeout": 0, 00:10:22.253 "avg_latency_us": 80.58517777482275, 00:10:22.253 "min_latency_us": 22.022707423580787, 00:10:22.253 "max_latency_us": 1452.380786026201 00:10:22.253 } 00:10:22.253 ], 00:10:22.253 "core_count": 1 00:10:22.253 } 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85403 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85403 ']' 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85403 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.253 11:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85403 00:10:22.528 killing process with pid 85403 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85403' 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85403 00:10:22.528 [2024-12-06 11:53:20.006864] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85403 00:10:22.528 [2024-12-06 11:53:20.043095] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YzDP2MGN4S 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:22.528 ************************************ 00:10:22.528 END TEST raid_read_error_test 00:10:22.528 ************************************ 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:22.528 00:10:22.528 real 0m3.334s 00:10:22.528 user 0m4.204s 00:10:22.528 sys 0m0.531s 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.528 11:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.789 11:53:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:22.789 11:53:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.789 11:53:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.789 11:53:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.789 ************************************ 00:10:22.789 START TEST raid_write_error_test 00:10:22.789 ************************************ 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.789 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fz5kajCtNs 00:10:22.790 11:53:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85532 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85532 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85532 ']' 00:10:22.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.790 11:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.790 [2024-12-06 11:53:20.460046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:22.790 [2024-12-06 11:53:20.460175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85532 ] 00:10:23.050 [2024-12-06 11:53:20.585049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.050 [2024-12-06 11:53:20.627913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.050 [2024-12-06 11:53:20.669326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.050 [2024-12-06 11:53:20.669361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.618 BaseBdev1_malloc 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.618 true 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.618 [2024-12-06 11:53:21.330536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.618 [2024-12-06 11:53:21.330592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.618 [2024-12-06 11:53:21.330616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:23.618 [2024-12-06 11:53:21.330625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.618 [2024-12-06 11:53:21.332695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.618 [2024-12-06 11:53:21.332735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.618 BaseBdev1 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.618 BaseBdev2_malloc 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.618 11:53:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.618 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 true 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 [2024-12-06 11:53:21.381118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.878 [2024-12-06 11:53:21.381252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.878 [2024-12-06 11:53:21.381277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:23.878 [2024-12-06 11:53:21.381286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.878 [2024-12-06 11:53:21.383337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.878 [2024-12-06 11:53:21.383369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.878 BaseBdev2 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:23.878 BaseBdev3_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 true 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 [2024-12-06 11:53:21.421607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.878 [2024-12-06 11:53:21.421650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.878 [2024-12-06 11:53:21.421668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:23.878 [2024-12-06 11:53:21.421676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.878 [2024-12-06 11:53:21.423654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.878 [2024-12-06 11:53:21.423740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.878 BaseBdev3 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 BaseBdev4_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 true 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 [2024-12-06 11:53:21.462011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:23.878 [2024-12-06 11:53:21.462056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.878 [2024-12-06 11:53:21.462076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:23.878 [2024-12-06 11:53:21.462085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.878 [2024-12-06 11:53:21.464151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.878 [2024-12-06 11:53:21.464184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:23.878 BaseBdev4 
00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.878 [2024-12-06 11:53:21.474046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.878 [2024-12-06 11:53:21.475911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.878 [2024-12-06 11:53:21.476037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.878 [2024-12-06 11:53:21.476105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.878 [2024-12-06 11:53:21.476304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:23.878 [2024-12-06 11:53:21.476321] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.878 [2024-12-06 11:53:21.476553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:23.878 [2024-12-06 11:53:21.476687] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:23.878 [2024-12-06 11:53:21.476699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:23.878 [2024-12-06 11:53:21.476812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.878 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.879 "name": "raid_bdev1", 00:10:23.879 "uuid": "7e31da33-a9e0-41ef-8800-08a3425650ed", 00:10:23.879 "strip_size_kb": 0, 00:10:23.879 "state": "online", 00:10:23.879 "raid_level": "raid1", 00:10:23.879 "superblock": true, 00:10:23.879 "num_base_bdevs": 4, 00:10:23.879 "num_base_bdevs_discovered": 4, 00:10:23.879 
"num_base_bdevs_operational": 4, 00:10:23.879 "base_bdevs_list": [ 00:10:23.879 { 00:10:23.879 "name": "BaseBdev1", 00:10:23.879 "uuid": "ed6fa892-2464-5f17-b599-9161683efd1d", 00:10:23.879 "is_configured": true, 00:10:23.879 "data_offset": 2048, 00:10:23.879 "data_size": 63488 00:10:23.879 }, 00:10:23.879 { 00:10:23.879 "name": "BaseBdev2", 00:10:23.879 "uuid": "749cd0a5-c67b-59b9-b470-4f39e9584996", 00:10:23.879 "is_configured": true, 00:10:23.879 "data_offset": 2048, 00:10:23.879 "data_size": 63488 00:10:23.879 }, 00:10:23.879 { 00:10:23.879 "name": "BaseBdev3", 00:10:23.879 "uuid": "288a4d55-81ef-5aff-9a27-f2fe0a7789b3", 00:10:23.879 "is_configured": true, 00:10:23.879 "data_offset": 2048, 00:10:23.879 "data_size": 63488 00:10:23.879 }, 00:10:23.879 { 00:10:23.879 "name": "BaseBdev4", 00:10:23.879 "uuid": "af99ce73-da41-53d9-9d88-f8f32759c491", 00:10:23.879 "is_configured": true, 00:10:23.879 "data_offset": 2048, 00:10:23.879 "data_size": 63488 00:10:23.879 } 00:10:23.879 ] 00:10:23.879 }' 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.879 11:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.448 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.448 11:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.448 [2024-12-06 11:53:21.997491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.385 [2024-12-06 11:53:22.915832] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:25.385 [2024-12-06 11:53:22.915981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.385 [2024-12-06 11:53:22.916243] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.385 "name": "raid_bdev1", 00:10:25.385 "uuid": "7e31da33-a9e0-41ef-8800-08a3425650ed", 00:10:25.385 "strip_size_kb": 0, 00:10:25.385 "state": "online", 00:10:25.385 "raid_level": "raid1", 00:10:25.385 "superblock": true, 00:10:25.385 "num_base_bdevs": 4, 00:10:25.385 "num_base_bdevs_discovered": 3, 00:10:25.385 "num_base_bdevs_operational": 3, 00:10:25.385 "base_bdevs_list": [ 00:10:25.385 { 00:10:25.385 "name": null, 00:10:25.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.385 "is_configured": false, 00:10:25.385 "data_offset": 0, 00:10:25.385 "data_size": 63488 00:10:25.385 }, 00:10:25.385 { 00:10:25.385 "name": "BaseBdev2", 00:10:25.385 "uuid": "749cd0a5-c67b-59b9-b470-4f39e9584996", 00:10:25.385 "is_configured": true, 00:10:25.385 "data_offset": 2048, 00:10:25.385 "data_size": 63488 00:10:25.385 }, 00:10:25.385 { 00:10:25.385 "name": "BaseBdev3", 00:10:25.385 "uuid": "288a4d55-81ef-5aff-9a27-f2fe0a7789b3", 00:10:25.385 "is_configured": true, 00:10:25.385 "data_offset": 2048, 00:10:25.385 "data_size": 63488 00:10:25.385 }, 00:10:25.385 { 00:10:25.385 "name": "BaseBdev4", 00:10:25.385 "uuid": "af99ce73-da41-53d9-9d88-f8f32759c491", 00:10:25.385 "is_configured": true, 00:10:25.385 "data_offset": 2048, 00:10:25.385 "data_size": 63488 00:10:25.385 } 00:10:25.385 ] 
00:10:25.385 }' 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.385 11:53:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.954 [2024-12-06 11:53:23.419984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.954 [2024-12-06 11:53:23.420080] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.954 [2024-12-06 11:53:23.422572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.954 [2024-12-06 11:53:23.422661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.954 [2024-12-06 11:53:23.422776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.954 [2024-12-06 11:53:23.422816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.954 { 00:10:25.954 "results": [ 00:10:25.954 { 00:10:25.954 "job": "raid_bdev1", 00:10:25.954 "core_mask": "0x1", 00:10:25.954 "workload": "randrw", 00:10:25.954 "percentage": 50, 00:10:25.954 "status": "finished", 00:10:25.954 "queue_depth": 1, 00:10:25.954 "io_size": 131072, 00:10:25.954 "runtime": 1.423496, 00:10:25.954 "iops": 13076.257327031477, 00:10:25.954 "mibps": 1634.5321658789346, 00:10:25.954 "io_failed": 0, 00:10:25.954 "io_timeout": 0, 00:10:25.954 "avg_latency_us": 73.98407603236143, 00:10:25.954 "min_latency_us": 21.799126637554586, 
00:10:25.954 "max_latency_us": 1566.8541484716156 00:10:25.954 } 00:10:25.954 ], 00:10:25.954 "core_count": 1 00:10:25.954 } 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85532 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85532 ']' 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85532 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85532 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.954 killing process with pid 85532 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85532' 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85532 00:10:25.954 [2024-12-06 11:53:23.469607] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.954 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85532 00:10:25.954 [2024-12-06 11:53:23.505165] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fz5kajCtNs 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:26.214 ************************************ 00:10:26.214 END TEST raid_write_error_test 00:10:26.214 ************************************ 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:26.214 00:10:26.214 real 0m3.387s 00:10:26.214 user 0m4.332s 00:10:26.214 sys 0m0.500s 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.214 11:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.214 11:53:23 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:26.214 11:53:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:26.214 11:53:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:26.214 11:53:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:26.214 11:53:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.214 11:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.214 ************************************ 00:10:26.214 START TEST raid_rebuild_test 00:10:26.214 ************************************ 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:26.214 
11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85665 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85665 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85665 ']' 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.214 11:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.214 [2024-12-06 11:53:23.913596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:26.214 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:26.214 Zero copy mechanism will not be used. 
00:10:26.214 [2024-12-06 11:53:23.913813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85665 ] 00:10:26.474 [2024-12-06 11:53:24.058830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.474 [2024-12-06 11:53:24.102268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.474 [2024-12-06 11:53:24.143841] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.474 [2024-12-06 11:53:24.143875] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 BaseBdev1_malloc 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 [2024-12-06 11:53:24.753680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:27.043 
[2024-12-06 11:53:24.753741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.043 [2024-12-06 11:53:24.753764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:27.043 [2024-12-06 11:53:24.753784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.043 [2024-12-06 11:53:24.755810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.043 [2024-12-06 11:53:24.755849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.043 BaseBdev1 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 BaseBdev2_malloc 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.043 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 [2024-12-06 11:53:24.793107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:27.043 [2024-12-06 11:53:24.793208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.043 [2024-12-06 11:53:24.793247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:10:27.043 [2024-12-06 11:53:24.793256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.043 [2024-12-06 11:53:24.795291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.043 [2024-12-06 11:53:24.795330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.303 BaseBdev2 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.303 spare_malloc 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.303 spare_delay 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.303 [2024-12-06 11:53:24.833456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:27.303 [2024-12-06 11:53:24.833504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:27.303 [2024-12-06 11:53:24.833524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.303 [2024-12-06 11:53:24.833532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.303 [2024-12-06 11:53:24.835564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.303 [2024-12-06 11:53:24.835599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:27.303 spare 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.303 [2024-12-06 11:53:24.845492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.303 [2024-12-06 11:53:24.847325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.303 [2024-12-06 11:53:24.847411] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:27.303 [2024-12-06 11:53:24.847424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:27.303 [2024-12-06 11:53:24.847677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:27.303 [2024-12-06 11:53:24.847795] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:27.303 [2024-12-06 11:53:24.847807] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:27.303 [2024-12-06 11:53:24.847917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.303 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.303 "name": "raid_bdev1", 00:10:27.303 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:27.303 "strip_size_kb": 0, 00:10:27.303 "state": "online", 00:10:27.303 
"raid_level": "raid1", 00:10:27.303 "superblock": false, 00:10:27.303 "num_base_bdevs": 2, 00:10:27.303 "num_base_bdevs_discovered": 2, 00:10:27.304 "num_base_bdevs_operational": 2, 00:10:27.304 "base_bdevs_list": [ 00:10:27.304 { 00:10:27.304 "name": "BaseBdev1", 00:10:27.304 "uuid": "ca56df4a-efde-5d86-9afb-4fc9adbcf943", 00:10:27.304 "is_configured": true, 00:10:27.304 "data_offset": 0, 00:10:27.304 "data_size": 65536 00:10:27.304 }, 00:10:27.304 { 00:10:27.304 "name": "BaseBdev2", 00:10:27.304 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:27.304 "is_configured": true, 00:10:27.304 "data_offset": 0, 00:10:27.304 "data_size": 65536 00:10:27.304 } 00:10:27.304 ] 00:10:27.304 }' 00:10:27.304 11:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.304 11:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.883 [2024-12-06 11:53:25.336871] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.883 11:53:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:27.883 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:27.883 [2024-12-06 11:53:25.612287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:27.883 /dev/nbd0 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.141 1+0 records in 00:10:28.141 1+0 records out 00:10:28.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436038 s, 9.4 MB/s 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:28.141 11:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:32.328 65536+0 records in 00:10:32.328 65536+0 records out 00:10:32.328 33554432 bytes (34 MB, 32 MiB) copied, 3.69032 s, 9.1 MB/s 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:32.328 [2024-12-06 11:53:29.565723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.328 [2024-12-06 11:53:29.604005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.328 "name": "raid_bdev1", 00:10:32.328 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:32.328 "strip_size_kb": 0, 00:10:32.328 "state": "online", 00:10:32.328 "raid_level": "raid1", 00:10:32.328 "superblock": false, 00:10:32.328 "num_base_bdevs": 2, 00:10:32.328 "num_base_bdevs_discovered": 1, 00:10:32.328 "num_base_bdevs_operational": 1, 00:10:32.328 "base_bdevs_list": [ 00:10:32.328 { 00:10:32.328 "name": null, 00:10:32.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.328 "is_configured": false, 00:10:32.328 "data_offset": 0, 00:10:32.328 "data_size": 65536 00:10:32.328 }, 00:10:32.328 { 00:10:32.328 "name": "BaseBdev2", 00:10:32.328 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:32.328 "is_configured": true, 00:10:32.328 "data_offset": 0, 00:10:32.328 "data_size": 65536 00:10:32.328 } 00:10:32.328 ] 00:10:32.328 }' 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.328 11:53:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.328 11:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:32.328 11:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.328 11:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.328 [2024-12-06 11:53:30.019368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:32.328 [2024-12-06 11:53:30.023520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 
00:10:32.328 11:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.328 11:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:32.328 [2024-12-06 11:53:30.025488] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:33.708 "name": "raid_bdev1", 00:10:33.708 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:33.708 "strip_size_kb": 0, 00:10:33.708 "state": "online", 00:10:33.708 "raid_level": "raid1", 00:10:33.708 "superblock": false, 00:10:33.708 "num_base_bdevs": 2, 00:10:33.708 "num_base_bdevs_discovered": 2, 00:10:33.708 "num_base_bdevs_operational": 2, 00:10:33.708 "process": { 00:10:33.708 "type": "rebuild", 00:10:33.708 "target": "spare", 00:10:33.708 "progress": { 00:10:33.708 
"blocks": 20480, 00:10:33.708 "percent": 31 00:10:33.708 } 00:10:33.708 }, 00:10:33.708 "base_bdevs_list": [ 00:10:33.708 { 00:10:33.708 "name": "spare", 00:10:33.708 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:33.708 "is_configured": true, 00:10:33.708 "data_offset": 0, 00:10:33.708 "data_size": 65536 00:10:33.708 }, 00:10:33.708 { 00:10:33.708 "name": "BaseBdev2", 00:10:33.708 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:33.708 "is_configured": true, 00:10:33.708 "data_offset": 0, 00:10:33.708 "data_size": 65536 00:10:33.708 } 00:10:33.708 ] 00:10:33.708 }' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.708 [2024-12-06 11:53:31.186001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:33.708 [2024-12-06 11:53:31.229793] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:33.708 [2024-12-06 11:53:31.229919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.708 [2024-12-06 11:53:31.229960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:33.708 [2024-12-06 11:53:31.229982] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:33.708 11:53:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.708 "name": "raid_bdev1", 00:10:33.708 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:33.708 "strip_size_kb": 0, 00:10:33.708 "state": "online", 00:10:33.708 "raid_level": "raid1", 00:10:33.708 
"superblock": false, 00:10:33.708 "num_base_bdevs": 2, 00:10:33.708 "num_base_bdevs_discovered": 1, 00:10:33.708 "num_base_bdevs_operational": 1, 00:10:33.708 "base_bdevs_list": [ 00:10:33.708 { 00:10:33.708 "name": null, 00:10:33.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.708 "is_configured": false, 00:10:33.708 "data_offset": 0, 00:10:33.708 "data_size": 65536 00:10:33.708 }, 00:10:33.708 { 00:10:33.708 "name": "BaseBdev2", 00:10:33.708 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:33.708 "is_configured": true, 00:10:33.708 "data_offset": 0, 00:10:33.708 "data_size": 65536 00:10:33.708 } 00:10:33.708 ] 00:10:33.708 }' 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.708 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:10:33.968 "name": "raid_bdev1", 00:10:33.968 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:33.968 "strip_size_kb": 0, 00:10:33.968 "state": "online", 00:10:33.968 "raid_level": "raid1", 00:10:33.968 "superblock": false, 00:10:33.968 "num_base_bdevs": 2, 00:10:33.968 "num_base_bdevs_discovered": 1, 00:10:33.968 "num_base_bdevs_operational": 1, 00:10:33.968 "base_bdevs_list": [ 00:10:33.968 { 00:10:33.968 "name": null, 00:10:33.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.968 "is_configured": false, 00:10:33.968 "data_offset": 0, 00:10:33.968 "data_size": 65536 00:10:33.968 }, 00:10:33.968 { 00:10:33.968 "name": "BaseBdev2", 00:10:33.968 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:33.968 "is_configured": true, 00:10:33.968 "data_offset": 0, 00:10:33.968 "data_size": 65536 00:10:33.968 } 00:10:33.968 ] 00:10:33.968 }' 00:10:33.968 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.228 [2024-12-06 11:53:31.809378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:34.228 [2024-12-06 11:53:31.813392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:10:34.228 11:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.228 
11:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:34.228 [2024-12-06 11:53:31.815317] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:35.168 "name": "raid_bdev1", 00:10:35.168 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:35.168 "strip_size_kb": 0, 00:10:35.168 "state": "online", 00:10:35.168 "raid_level": "raid1", 00:10:35.168 "superblock": false, 00:10:35.168 "num_base_bdevs": 2, 00:10:35.168 "num_base_bdevs_discovered": 2, 00:10:35.168 "num_base_bdevs_operational": 2, 00:10:35.168 "process": { 00:10:35.168 "type": "rebuild", 00:10:35.168 "target": "spare", 00:10:35.168 "progress": { 00:10:35.168 "blocks": 20480, 00:10:35.168 "percent": 31 00:10:35.168 } 00:10:35.168 }, 00:10:35.168 "base_bdevs_list": [ 
00:10:35.168 { 00:10:35.168 "name": "spare", 00:10:35.168 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:35.168 "is_configured": true, 00:10:35.168 "data_offset": 0, 00:10:35.168 "data_size": 65536 00:10:35.168 }, 00:10:35.168 { 00:10:35.168 "name": "BaseBdev2", 00:10:35.168 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:35.168 "is_configured": true, 00:10:35.168 "data_offset": 0, 00:10:35.168 "data_size": 65536 00:10:35.168 } 00:10:35.168 ] 00:10:35.168 }' 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:35.168 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=286 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:35.429 
11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.429 11:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:35.429 "name": "raid_bdev1", 00:10:35.429 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:35.429 "strip_size_kb": 0, 00:10:35.429 "state": "online", 00:10:35.429 "raid_level": "raid1", 00:10:35.429 "superblock": false, 00:10:35.429 "num_base_bdevs": 2, 00:10:35.429 "num_base_bdevs_discovered": 2, 00:10:35.429 "num_base_bdevs_operational": 2, 00:10:35.429 "process": { 00:10:35.429 "type": "rebuild", 00:10:35.429 "target": "spare", 00:10:35.429 "progress": { 00:10:35.429 "blocks": 22528, 00:10:35.429 "percent": 34 00:10:35.429 } 00:10:35.429 }, 00:10:35.429 "base_bdevs_list": [ 00:10:35.429 { 00:10:35.429 "name": "spare", 00:10:35.429 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:35.429 "is_configured": true, 00:10:35.429 "data_offset": 0, 00:10:35.429 "data_size": 65536 00:10:35.429 }, 00:10:35.429 { 00:10:35.429 "name": "BaseBdev2", 00:10:35.429 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:35.429 "is_configured": true, 00:10:35.429 "data_offset": 0, 00:10:35.429 "data_size": 65536 00:10:35.429 } 00:10:35.429 ] 00:10:35.429 }' 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:35.429 11:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.368 11:53:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 11:53:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:36.627 "name": "raid_bdev1", 00:10:36.627 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:36.627 "strip_size_kb": 0, 00:10:36.627 "state": "online", 00:10:36.627 "raid_level": "raid1", 00:10:36.627 "superblock": false, 00:10:36.627 "num_base_bdevs": 2, 00:10:36.627 "num_base_bdevs_discovered": 2, 00:10:36.627 "num_base_bdevs_operational": 2, 00:10:36.627 "process": { 
00:10:36.627 "type": "rebuild", 00:10:36.627 "target": "spare", 00:10:36.627 "progress": { 00:10:36.627 "blocks": 47104, 00:10:36.627 "percent": 71 00:10:36.627 } 00:10:36.627 }, 00:10:36.627 "base_bdevs_list": [ 00:10:36.627 { 00:10:36.627 "name": "spare", 00:10:36.627 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 }, 00:10:36.627 { 00:10:36.627 "name": "BaseBdev2", 00:10:36.627 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 } 00:10:36.627 ] 00:10:36.627 }' 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:36.627 11:53:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:37.563 [2024-12-06 11:53:35.025472] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:37.563 [2024-12-06 11:53:35.025598] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:37.563 [2024-12-06 11:53:35.025665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:37.563 "name": "raid_bdev1", 00:10:37.563 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:37.563 "strip_size_kb": 0, 00:10:37.563 "state": "online", 00:10:37.563 "raid_level": "raid1", 00:10:37.563 "superblock": false, 00:10:37.563 "num_base_bdevs": 2, 00:10:37.563 "num_base_bdevs_discovered": 2, 00:10:37.563 "num_base_bdevs_operational": 2, 00:10:37.563 "base_bdevs_list": [ 00:10:37.563 { 00:10:37.563 "name": "spare", 00:10:37.563 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:37.563 "is_configured": true, 00:10:37.563 "data_offset": 0, 00:10:37.563 "data_size": 65536 00:10:37.563 }, 00:10:37.563 { 00:10:37.563 "name": "BaseBdev2", 00:10:37.563 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:37.563 "is_configured": true, 00:10:37.563 "data_offset": 0, 00:10:37.563 "data_size": 65536 00:10:37.563 } 00:10:37.563 ] 00:10:37.563 }' 00:10:37.563 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:37.849 11:53:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:37.849 "name": "raid_bdev1", 00:10:37.849 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:37.849 "strip_size_kb": 0, 00:10:37.849 "state": "online", 00:10:37.849 "raid_level": "raid1", 00:10:37.849 "superblock": false, 00:10:37.849 "num_base_bdevs": 2, 00:10:37.849 "num_base_bdevs_discovered": 2, 00:10:37.849 "num_base_bdevs_operational": 2, 00:10:37.849 "base_bdevs_list": [ 00:10:37.849 { 00:10:37.849 "name": "spare", 00:10:37.849 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:37.849 "is_configured": true, 
00:10:37.849 "data_offset": 0, 00:10:37.849 "data_size": 65536 00:10:37.849 }, 00:10:37.849 { 00:10:37.849 "name": "BaseBdev2", 00:10:37.849 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:37.849 "is_configured": true, 00:10:37.849 "data_offset": 0, 00:10:37.849 "data_size": 65536 00:10:37.849 } 00:10:37.849 ] 00:10:37.849 }' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.849 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.850 "name": "raid_bdev1", 00:10:37.850 "uuid": "557d0059-281d-4140-9983-747192dc20cc", 00:10:37.850 "strip_size_kb": 0, 00:10:37.850 "state": "online", 00:10:37.850 "raid_level": "raid1", 00:10:37.850 "superblock": false, 00:10:37.850 "num_base_bdevs": 2, 00:10:37.850 "num_base_bdevs_discovered": 2, 00:10:37.850 "num_base_bdevs_operational": 2, 00:10:37.850 "base_bdevs_list": [ 00:10:37.850 { 00:10:37.850 "name": "spare", 00:10:37.850 "uuid": "48b662b2-beb5-5ba3-b284-b043020d839d", 00:10:37.850 "is_configured": true, 00:10:37.850 "data_offset": 0, 00:10:37.850 "data_size": 65536 00:10:37.850 }, 00:10:37.850 { 00:10:37.850 "name": "BaseBdev2", 00:10:37.850 "uuid": "17bcb790-419e-51be-8eca-ef9e5403fcee", 00:10:37.850 "is_configured": true, 00:10:37.850 "data_offset": 0, 00:10:37.850 "data_size": 65536 00:10:37.850 } 00:10:37.850 ] 00:10:37.850 }' 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.850 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.418 [2024-12-06 11:53:35.979993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.418 [2024-12-06 11:53:35.980071] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.418 [2024-12-06 11:53:35.980172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.418 [2024-12-06 11:53:35.980287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.418 [2024-12-06 11:53:35.980337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.418 11:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.418 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:38.677 /dev/nbd0 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.677 1+0 records in 00:10:38.677 1+0 records out 00:10:38.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176722 s, 23.2 MB/s 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.677 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:38.935 /dev/nbd1 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:38.935 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.936 1+0 records in 00:10:38.936 1+0 records out 00:10:38.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469862 s, 8.7 MB/s 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.936 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.195 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85665 00:10:39.454 11:53:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85665 ']' 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85665 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85665 00:10:39.454 killing process with pid 85665 00:10:39.454 Received shutdown signal, test time was about 60.000000 seconds 00:10:39.454 00:10:39.454 Latency(us) 00:10:39.454 [2024-12-06T11:53:37.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.454 [2024-12-06T11:53:37.212Z] =================================================================================================================== 00:10:39.454 [2024-12-06T11:53:37.212Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85665' 00:10:39.454 11:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85665 00:10:39.454 [2024-12-06 11:53:37.001407] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.454 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85665 00:10:39.454 [2024-12-06 11:53:37.032169] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.714 ************************************ 00:10:39.714 END TEST raid_rebuild_test 00:10:39.714 ************************************ 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:10:39.714 00:10:39.714 real 0m13.436s 00:10:39.714 user 0m15.562s 00:10:39.714 sys 0m2.756s 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.714 11:53:37 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:39.714 11:53:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:39.714 11:53:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.714 11:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.714 ************************************ 00:10:39.714 START TEST raid_rebuild_test_sb 00:10:39.714 ************************************ 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:39.714 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86065 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86065 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86065 ']' 00:10:39.715 
11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.715 11:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.715 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:39.715 Zero copy mechanism will not be used. 00:10:39.715 [2024-12-06 11:53:37.422063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:39.715 [2024-12-06 11:53:37.422181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86065 ] 00:10:39.974 [2024-12-06 11:53:37.546458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.974 [2024-12-06 11:53:37.588830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.974 [2024-12-06 11:53:37.630463] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.974 [2024-12-06 11:53:37.630499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- 
# for bdev in "${base_bdevs[@]}" 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.543 BaseBdev1_malloc 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.543 [2024-12-06 11:53:38.259894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:40.543 [2024-12-06 11:53:38.259950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.543 [2024-12-06 11:53:38.259990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:40.543 [2024-12-06 11:53:38.260011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.543 [2024-12-06 11:53:38.262037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.543 [2024-12-06 11:53:38.262083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.543 BaseBdev1 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.543 11:53:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.543 BaseBdev2_malloc 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.543 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.802 [2024-12-06 11:53:38.304146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:40.802 [2024-12-06 11:53:38.304274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.802 [2024-12-06 11:53:38.304321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:40.802 [2024-12-06 11:53:38.304344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.802 [2024-12-06 11:53:38.309005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.802 [2024-12-06 11:53:38.309104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.802 BaseBdev2 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.802 spare_malloc 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.802 spare_delay 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.802 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.802 [2024-12-06 11:53:38.347046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:40.802 [2024-12-06 11:53:38.347098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.802 [2024-12-06 11:53:38.347119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.802 [2024-12-06 11:53:38.347127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.802 [2024-12-06 11:53:38.349186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.802 [2024-12-06 11:53:38.349302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:40.802 spare 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.803 11:53:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.803 [2024-12-06 11:53:38.359076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.803 [2024-12-06 11:53:38.360863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.803 [2024-12-06 11:53:38.361066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:40.803 [2024-12-06 11:53:38.361082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.803 [2024-12-06 11:53:38.361341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:40.803 [2024-12-06 11:53:38.361464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:40.803 [2024-12-06 11:53:38.361481] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:40.803 [2024-12-06 11:53:38.361600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.803 "name": "raid_bdev1", 00:10:40.803 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:40.803 "strip_size_kb": 0, 00:10:40.803 "state": "online", 00:10:40.803 "raid_level": "raid1", 00:10:40.803 "superblock": true, 00:10:40.803 "num_base_bdevs": 2, 00:10:40.803 "num_base_bdevs_discovered": 2, 00:10:40.803 "num_base_bdevs_operational": 2, 00:10:40.803 "base_bdevs_list": [ 00:10:40.803 { 00:10:40.803 "name": "BaseBdev1", 00:10:40.803 "uuid": "a47fd771-adb1-5d17-95fa-0817e197ec93", 00:10:40.803 "is_configured": true, 00:10:40.803 "data_offset": 2048, 00:10:40.803 "data_size": 63488 00:10:40.803 }, 00:10:40.803 { 00:10:40.803 "name": "BaseBdev2", 00:10:40.803 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:40.803 "is_configured": true, 00:10:40.803 "data_offset": 2048, 00:10:40.803 "data_size": 63488 00:10:40.803 } 00:10:40.803 ] 00:10:40.803 }' 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.803 11:53:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.066 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.066 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.066 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:41.066 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.066 [2024-12-06 11:53:38.782565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.066 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.331 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:41.332 11:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:41.332 [2024-12-06 11:53:39.033958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:41.332 /dev/nbd0 00:10:41.332 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:41.332 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:41.332 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:41.332 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:41.332 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.591 1+0 records in 00:10:41.591 1+0 records out 00:10:41.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287579 s, 14.2 MB/s 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:41.591 11:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:44.882 63488+0 records in 00:10:44.882 63488+0 records out 00:10:44.882 32505856 bytes (33 MB, 31 MiB) copied, 3.44446 s, 9.4 MB/s 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:44.882 11:53:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.882 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:45.140 [2024-12-06 11:53:42.748694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.140 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.140 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.140 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.140 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.141 [2024-12-06 11:53:42.780731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.141 "name": "raid_bdev1", 00:10:45.141 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:45.141 "strip_size_kb": 0, 00:10:45.141 "state": "online", 00:10:45.141 "raid_level": "raid1", 00:10:45.141 "superblock": true, 
00:10:45.141 "num_base_bdevs": 2, 00:10:45.141 "num_base_bdevs_discovered": 1, 00:10:45.141 "num_base_bdevs_operational": 1, 00:10:45.141 "base_bdevs_list": [ 00:10:45.141 { 00:10:45.141 "name": null, 00:10:45.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.141 "is_configured": false, 00:10:45.141 "data_offset": 0, 00:10:45.141 "data_size": 63488 00:10:45.141 }, 00:10:45.141 { 00:10:45.141 "name": "BaseBdev2", 00:10:45.141 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:45.141 "is_configured": true, 00:10:45.141 "data_offset": 2048, 00:10:45.141 "data_size": 63488 00:10:45.141 } 00:10:45.141 ] 00:10:45.141 }' 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.141 11:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 11:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:45.708 11:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.708 11:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 [2024-12-06 11:53:43.180038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:45.708 [2024-12-06 11:53:43.184204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:10:45.708 11:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.708 11:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:45.708 [2024-12-06 11:53:43.186105] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.645 "name": "raid_bdev1", 00:10:46.645 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:46.645 "strip_size_kb": 0, 00:10:46.645 "state": "online", 00:10:46.645 "raid_level": "raid1", 00:10:46.645 "superblock": true, 00:10:46.645 "num_base_bdevs": 2, 00:10:46.645 "num_base_bdevs_discovered": 2, 00:10:46.645 "num_base_bdevs_operational": 2, 00:10:46.645 "process": { 00:10:46.645 "type": "rebuild", 00:10:46.645 "target": "spare", 00:10:46.645 "progress": { 00:10:46.645 "blocks": 20480, 00:10:46.645 "percent": 32 00:10:46.645 } 00:10:46.645 }, 00:10:46.645 "base_bdevs_list": [ 00:10:46.645 { 00:10:46.645 "name": "spare", 00:10:46.645 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:46.645 "is_configured": true, 00:10:46.645 "data_offset": 2048, 00:10:46.645 "data_size": 63488 00:10:46.645 }, 00:10:46.645 { 00:10:46.645 "name": "BaseBdev2", 00:10:46.645 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:46.645 "is_configured": true, 00:10:46.645 "data_offset": 2048, 00:10:46.645 "data_size": 63488 
00:10:46.645 } 00:10:46.645 ] 00:10:46.645 }' 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.645 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.645 [2024-12-06 11:53:44.346719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:46.645 [2024-12-06 11:53:44.390426] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:46.645 [2024-12-06 11:53:44.390476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.645 [2024-12-06 11:53:44.390494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:46.645 [2024-12-06 11:53:44.390501] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.911 "name": "raid_bdev1", 00:10:46.911 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:46.911 "strip_size_kb": 0, 00:10:46.911 "state": "online", 00:10:46.911 "raid_level": "raid1", 00:10:46.911 "superblock": true, 00:10:46.911 "num_base_bdevs": 2, 00:10:46.911 "num_base_bdevs_discovered": 1, 00:10:46.911 "num_base_bdevs_operational": 1, 00:10:46.911 "base_bdevs_list": [ 00:10:46.911 { 00:10:46.911 "name": null, 00:10:46.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.911 "is_configured": false, 00:10:46.911 "data_offset": 0, 00:10:46.911 "data_size": 63488 00:10:46.911 }, 00:10:46.911 { 00:10:46.911 "name": "BaseBdev2", 00:10:46.911 "uuid": 
"6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:46.911 "is_configured": true, 00:10:46.911 "data_offset": 2048, 00:10:46.911 "data_size": 63488 00:10:46.911 } 00:10:46.911 ] 00:10:46.911 }' 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.911 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.170 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:47.170 "name": "raid_bdev1", 00:10:47.170 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:47.170 "strip_size_kb": 0, 00:10:47.170 "state": "online", 00:10:47.170 "raid_level": "raid1", 00:10:47.170 "superblock": true, 00:10:47.170 "num_base_bdevs": 2, 00:10:47.170 "num_base_bdevs_discovered": 1, 00:10:47.170 "num_base_bdevs_operational": 1, 00:10:47.170 "base_bdevs_list": [ 00:10:47.170 { 
00:10:47.170 "name": null, 00:10:47.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.170 "is_configured": false, 00:10:47.170 "data_offset": 0, 00:10:47.171 "data_size": 63488 00:10:47.171 }, 00:10:47.171 { 00:10:47.171 "name": "BaseBdev2", 00:10:47.171 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:47.171 "is_configured": true, 00:10:47.171 "data_offset": 2048, 00:10:47.171 "data_size": 63488 00:10:47.171 } 00:10:47.171 ] 00:10:47.171 }' 00:10:47.171 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 [2024-12-06 11:53:44.965853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:47.430 [2024-12-06 11:53:44.969857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.430 11:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:47.430 [2024-12-06 11:53:44.971726] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.367 11:53:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.367 11:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.367 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.367 "name": "raid_bdev1", 00:10:48.367 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:48.367 "strip_size_kb": 0, 00:10:48.367 "state": "online", 00:10:48.367 "raid_level": "raid1", 00:10:48.367 "superblock": true, 00:10:48.367 "num_base_bdevs": 2, 00:10:48.367 "num_base_bdevs_discovered": 2, 00:10:48.367 "num_base_bdevs_operational": 2, 00:10:48.367 "process": { 00:10:48.367 "type": "rebuild", 00:10:48.367 "target": "spare", 00:10:48.367 "progress": { 00:10:48.367 "blocks": 20480, 00:10:48.367 "percent": 32 00:10:48.367 } 00:10:48.367 }, 00:10:48.367 "base_bdevs_list": [ 00:10:48.367 { 00:10:48.367 "name": "spare", 00:10:48.367 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:48.367 "is_configured": true, 00:10:48.367 "data_offset": 2048, 00:10:48.368 "data_size": 63488 00:10:48.368 }, 00:10:48.368 { 00:10:48.368 "name": "BaseBdev2", 00:10:48.368 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:48.368 
"is_configured": true, 00:10:48.368 "data_offset": 2048, 00:10:48.368 "data_size": 63488 00:10:48.368 } 00:10:48.368 ] 00:10:48.368 }' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:48.368 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=300 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.368 11:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.627 11:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.627 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.627 "name": "raid_bdev1", 00:10:48.627 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:48.627 "strip_size_kb": 0, 00:10:48.627 "state": "online", 00:10:48.627 "raid_level": "raid1", 00:10:48.627 "superblock": true, 00:10:48.627 "num_base_bdevs": 2, 00:10:48.627 "num_base_bdevs_discovered": 2, 00:10:48.627 "num_base_bdevs_operational": 2, 00:10:48.627 "process": { 00:10:48.627 "type": "rebuild", 00:10:48.627 "target": "spare", 00:10:48.627 "progress": { 00:10:48.627 "blocks": 22528, 00:10:48.627 "percent": 35 00:10:48.627 } 00:10:48.627 }, 00:10:48.627 "base_bdevs_list": [ 00:10:48.627 { 00:10:48.627 "name": "spare", 00:10:48.627 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:48.627 "is_configured": true, 00:10:48.627 "data_offset": 2048, 00:10:48.627 "data_size": 63488 00:10:48.627 }, 00:10:48.627 { 00:10:48.627 "name": "BaseBdev2", 00:10:48.627 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:48.627 "is_configured": true, 00:10:48.627 "data_offset": 2048, 00:10:48.627 "data_size": 63488 00:10:48.627 } 00:10:48.627 ] 00:10:48.627 }' 00:10:48.627 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.628 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:48.628 11:53:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.628 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:48.628 11:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:49.566 "name": "raid_bdev1", 00:10:49.566 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:49.566 "strip_size_kb": 0, 00:10:49.566 "state": "online", 00:10:49.566 "raid_level": "raid1", 00:10:49.566 "superblock": true, 00:10:49.566 "num_base_bdevs": 2, 00:10:49.566 "num_base_bdevs_discovered": 2, 00:10:49.566 "num_base_bdevs_operational": 2, 00:10:49.566 "process": { 
00:10:49.566 "type": "rebuild", 00:10:49.566 "target": "spare", 00:10:49.566 "progress": { 00:10:49.566 "blocks": 45056, 00:10:49.566 "percent": 70 00:10:49.566 } 00:10:49.566 }, 00:10:49.566 "base_bdevs_list": [ 00:10:49.566 { 00:10:49.566 "name": "spare", 00:10:49.566 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:49.566 "is_configured": true, 00:10:49.566 "data_offset": 2048, 00:10:49.566 "data_size": 63488 00:10:49.566 }, 00:10:49.566 { 00:10:49.566 "name": "BaseBdev2", 00:10:49.566 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:49.566 "is_configured": true, 00:10:49.566 "data_offset": 2048, 00:10:49.566 "data_size": 63488 00:10:49.566 } 00:10:49.566 ] 00:10:49.566 }' 00:10:49.566 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:49.826 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:49.826 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:49.826 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:49.826 11:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:50.395 [2024-12-06 11:53:48.081723] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:50.395 [2024-12-06 11:53:48.081811] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:50.395 [2024-12-06 11:53:48.081955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.654 
11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.654 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.913 "name": "raid_bdev1", 00:10:50.913 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:50.913 "strip_size_kb": 0, 00:10:50.913 "state": "online", 00:10:50.913 "raid_level": "raid1", 00:10:50.913 "superblock": true, 00:10:50.913 "num_base_bdevs": 2, 00:10:50.913 "num_base_bdevs_discovered": 2, 00:10:50.913 "num_base_bdevs_operational": 2, 00:10:50.913 "base_bdevs_list": [ 00:10:50.913 { 00:10:50.913 "name": "spare", 00:10:50.913 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:50.913 "is_configured": true, 00:10:50.913 "data_offset": 2048, 00:10:50.913 "data_size": 63488 00:10:50.913 }, 00:10:50.913 { 00:10:50.913 "name": "BaseBdev2", 00:10:50.913 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:50.913 "is_configured": true, 00:10:50.913 "data_offset": 2048, 00:10:50.913 "data_size": 63488 00:10:50.913 } 00:10:50.913 ] 00:10:50.913 }' 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.913 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.914 "name": "raid_bdev1", 00:10:50.914 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:50.914 "strip_size_kb": 0, 00:10:50.914 "state": "online", 00:10:50.914 "raid_level": "raid1", 00:10:50.914 "superblock": true, 00:10:50.914 "num_base_bdevs": 2, 00:10:50.914 "num_base_bdevs_discovered": 2, 00:10:50.914 "num_base_bdevs_operational": 2, 00:10:50.914 "base_bdevs_list": [ 00:10:50.914 { 00:10:50.914 
"name": "spare", 00:10:50.914 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:50.914 "is_configured": true, 00:10:50.914 "data_offset": 2048, 00:10:50.914 "data_size": 63488 00:10:50.914 }, 00:10:50.914 { 00:10:50.914 "name": "BaseBdev2", 00:10:50.914 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:50.914 "is_configured": true, 00:10:50.914 "data_offset": 2048, 00:10:50.914 "data_size": 63488 00:10:50.914 } 00:10:50.914 ] 00:10:50.914 }' 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:50.914 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.172 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.172 "name": "raid_bdev1", 00:10:51.172 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:51.172 "strip_size_kb": 0, 00:10:51.172 "state": "online", 00:10:51.172 "raid_level": "raid1", 00:10:51.172 "superblock": true, 00:10:51.172 "num_base_bdevs": 2, 00:10:51.172 "num_base_bdevs_discovered": 2, 00:10:51.172 "num_base_bdevs_operational": 2, 00:10:51.172 "base_bdevs_list": [ 00:10:51.172 { 00:10:51.172 "name": "spare", 00:10:51.173 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:51.173 "is_configured": true, 00:10:51.173 "data_offset": 2048, 00:10:51.173 "data_size": 63488 00:10:51.173 }, 00:10:51.173 { 00:10:51.173 "name": "BaseBdev2", 00:10:51.173 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:51.173 "is_configured": true, 00:10:51.173 "data_offset": 2048, 00:10:51.173 "data_size": 63488 00:10:51.173 } 00:10:51.173 ] 00:10:51.173 }' 00:10:51.173 11:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.173 11:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.431 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.432 [2024-12-06 11:53:49.100528] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.432 [2024-12-06 11:53:49.100554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.432 [2024-12-06 11:53:49.100650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.432 [2024-12-06 11:53:49.100726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.432 [2024-12-06 11:53:49.100739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.432 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:51.691 /dev/nbd0 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:51.691 1+0 records in 00:10:51.691 1+0 records out 00:10:51.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419482 s, 9.8 MB/s 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.691 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:51.950 /dev/nbd1 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:51.950 11:53:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:51.950 1+0 records in 00:10:51.950 1+0 records out 00:10:51.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333776 s, 12.3 MB/s 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.950 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:52.209 
11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.209 11:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.469 [2024-12-06 11:53:50.169361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:52.469 [2024-12-06 11:53:50.169418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.469 [2024-12-06 11:53:50.169455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.469 [2024-12-06 11:53:50.169468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.469 [2024-12-06 11:53:50.171698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.469 [2024-12-06 11:53:50.171776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:52.469 [2024-12-06 11:53:50.171883] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:52.469 [2024-12-06 
11:53:50.171956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:52.469 [2024-12-06 11:53:50.172103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.469 spare 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.469 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.729 [2024-12-06 11:53:50.272045] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:10:52.729 [2024-12-06 11:53:50.272108] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.729 [2024-12-06 11:53:50.272413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:10:52.729 [2024-12-06 11:53:50.272599] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:10:52.729 [2024-12-06 11:53:50.272648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:10:52.729 [2024-12-06 11:53:50.272825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.729 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.729 "name": "raid_bdev1", 00:10:52.729 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:52.729 "strip_size_kb": 0, 00:10:52.729 "state": "online", 00:10:52.729 "raid_level": "raid1", 00:10:52.729 "superblock": true, 00:10:52.729 "num_base_bdevs": 2, 00:10:52.729 "num_base_bdevs_discovered": 2, 00:10:52.729 "num_base_bdevs_operational": 2, 00:10:52.729 "base_bdevs_list": [ 00:10:52.729 { 00:10:52.729 "name": "spare", 00:10:52.729 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:52.729 "is_configured": true, 00:10:52.729 "data_offset": 2048, 00:10:52.729 "data_size": 63488 00:10:52.729 }, 00:10:52.729 { 00:10:52.730 "name": "BaseBdev2", 00:10:52.730 "uuid": 
"6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:52.730 "is_configured": true, 00:10:52.730 "data_offset": 2048, 00:10:52.730 "data_size": 63488 00:10:52.730 } 00:10:52.730 ] 00:10:52.730 }' 00:10:52.730 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.730 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.989 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.248 "name": "raid_bdev1", 00:10:53.248 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:53.248 "strip_size_kb": 0, 00:10:53.248 "state": "online", 00:10:53.248 "raid_level": "raid1", 00:10:53.248 "superblock": true, 00:10:53.248 "num_base_bdevs": 2, 00:10:53.248 "num_base_bdevs_discovered": 2, 00:10:53.248 "num_base_bdevs_operational": 2, 00:10:53.248 "base_bdevs_list": [ 00:10:53.248 { 
00:10:53.248 "name": "spare", 00:10:53.248 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:53.248 "is_configured": true, 00:10:53.248 "data_offset": 2048, 00:10:53.248 "data_size": 63488 00:10:53.248 }, 00:10:53.248 { 00:10:53.248 "name": "BaseBdev2", 00:10:53.248 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:53.248 "is_configured": true, 00:10:53.248 "data_offset": 2048, 00:10:53.248 "data_size": 63488 00:10:53.248 } 00:10:53.248 ] 00:10:53.248 }' 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 [2024-12-06 11:53:50.892189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.248 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.248 "name": "raid_bdev1", 00:10:53.248 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:53.248 "strip_size_kb": 0, 00:10:53.248 
"state": "online", 00:10:53.248 "raid_level": "raid1", 00:10:53.248 "superblock": true, 00:10:53.248 "num_base_bdevs": 2, 00:10:53.248 "num_base_bdevs_discovered": 1, 00:10:53.248 "num_base_bdevs_operational": 1, 00:10:53.248 "base_bdevs_list": [ 00:10:53.248 { 00:10:53.248 "name": null, 00:10:53.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.248 "is_configured": false, 00:10:53.248 "data_offset": 0, 00:10:53.248 "data_size": 63488 00:10:53.248 }, 00:10:53.248 { 00:10:53.248 "name": "BaseBdev2", 00:10:53.248 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:53.248 "is_configured": true, 00:10:53.248 "data_offset": 2048, 00:10:53.248 "data_size": 63488 00:10:53.249 } 00:10:53.249 ] 00:10:53.249 }' 00:10:53.249 11:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.249 11:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.818 11:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:53.818 11:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.818 11:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.818 [2024-12-06 11:53:51.319494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:53.818 [2024-12-06 11:53:51.319741] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:53.818 [2024-12-06 11:53:51.319802] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:53.818 [2024-12-06 11:53:51.319862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:53.818 [2024-12-06 11:53:51.323828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:10:53.818 11:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.818 11:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:53.818 [2024-12-06 11:53:51.325659] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.757 "name": "raid_bdev1", 00:10:54.757 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:54.757 "strip_size_kb": 0, 00:10:54.757 "state": "online", 00:10:54.757 "raid_level": "raid1", 
00:10:54.757 "superblock": true, 00:10:54.757 "num_base_bdevs": 2, 00:10:54.757 "num_base_bdevs_discovered": 2, 00:10:54.757 "num_base_bdevs_operational": 2, 00:10:54.757 "process": { 00:10:54.757 "type": "rebuild", 00:10:54.757 "target": "spare", 00:10:54.757 "progress": { 00:10:54.757 "blocks": 20480, 00:10:54.757 "percent": 32 00:10:54.757 } 00:10:54.757 }, 00:10:54.757 "base_bdevs_list": [ 00:10:54.757 { 00:10:54.757 "name": "spare", 00:10:54.757 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:54.757 "is_configured": true, 00:10:54.757 "data_offset": 2048, 00:10:54.757 "data_size": 63488 00:10:54.757 }, 00:10:54.757 { 00:10:54.757 "name": "BaseBdev2", 00:10:54.757 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:54.757 "is_configured": true, 00:10:54.757 "data_offset": 2048, 00:10:54.757 "data_size": 63488 00:10:54.757 } 00:10:54.757 ] 00:10:54.757 }' 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.757 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.757 [2024-12-06 11:53:52.490818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:55.016 [2024-12-06 11:53:52.529514] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:55.016 [2024-12-06 11:53:52.529565] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:55.016 [2024-12-06 11:53:52.529581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:55.016 [2024-12-06 11:53:52.529589] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.016 "name": "raid_bdev1", 00:10:55.016 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:55.016 "strip_size_kb": 0, 00:10:55.016 "state": "online", 00:10:55.016 "raid_level": "raid1", 00:10:55.016 "superblock": true, 00:10:55.016 "num_base_bdevs": 2, 00:10:55.016 "num_base_bdevs_discovered": 1, 00:10:55.016 "num_base_bdevs_operational": 1, 00:10:55.016 "base_bdevs_list": [ 00:10:55.016 { 00:10:55.016 "name": null, 00:10:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.016 "is_configured": false, 00:10:55.016 "data_offset": 0, 00:10:55.016 "data_size": 63488 00:10:55.016 }, 00:10:55.016 { 00:10:55.016 "name": "BaseBdev2", 00:10:55.016 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:55.016 "is_configured": true, 00:10:55.016 "data_offset": 2048, 00:10:55.016 "data_size": 63488 00:10:55.016 } 00:10:55.016 ] 00:10:55.016 }' 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.016 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.275 11:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:55.275 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.275 11:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.275 [2024-12-06 11:53:53.000822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:55.275 [2024-12-06 11:53:53.000935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.275 [2024-12-06 11:53:53.000978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:55.275 [2024-12-06 11:53:53.001019] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.275 [2024-12-06 11:53:53.001516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.275 [2024-12-06 11:53:53.001569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:55.275 [2024-12-06 11:53:53.001683] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:55.275 [2024-12-06 11:53:53.001720] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:55.275 [2024-12-06 11:53:53.001771] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:55.275 [2024-12-06 11:53:53.001819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:55.275 [2024-12-06 11:53:53.005772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:10:55.275 spare 00:10:55.275 11:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.275 11:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:55.275 [2024-12-06 11:53:53.007633] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.660 "name": "raid_bdev1", 00:10:56.660 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:56.660 "strip_size_kb": 0, 00:10:56.660 "state": "online", 00:10:56.660 "raid_level": "raid1", 00:10:56.660 "superblock": true, 00:10:56.660 "num_base_bdevs": 2, 00:10:56.660 "num_base_bdevs_discovered": 2, 00:10:56.660 "num_base_bdevs_operational": 2, 00:10:56.660 "process": { 00:10:56.660 "type": "rebuild", 00:10:56.660 "target": "spare", 00:10:56.660 "progress": { 00:10:56.660 "blocks": 20480, 00:10:56.660 "percent": 32 00:10:56.660 } 00:10:56.660 }, 00:10:56.660 "base_bdevs_list": [ 00:10:56.660 { 00:10:56.660 "name": "spare", 00:10:56.660 "uuid": "f64dc4e1-ec84-50e6-9eee-58d7e23d9e02", 00:10:56.660 "is_configured": true, 00:10:56.660 "data_offset": 2048, 00:10:56.660 "data_size": 63488 00:10:56.660 }, 00:10:56.660 { 00:10:56.660 "name": "BaseBdev2", 00:10:56.660 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:56.660 "is_configured": true, 00:10:56.660 "data_offset": 2048, 00:10:56.660 "data_size": 63488 00:10:56.660 } 00:10:56.660 ] 00:10:56.660 }' 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.660 
11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.661 [2024-12-06 11:53:54.155628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:56.661 [2024-12-06 11:53:54.211461] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:56.661 [2024-12-06 11:53:54.211517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.661 [2024-12-06 11:53:54.211531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:56.661 [2024-12-06 11:53:54.211541] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.661 "name": "raid_bdev1", 00:10:56.661 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:56.661 "strip_size_kb": 0, 00:10:56.661 "state": "online", 00:10:56.661 "raid_level": "raid1", 00:10:56.661 "superblock": true, 00:10:56.661 "num_base_bdevs": 2, 00:10:56.661 "num_base_bdevs_discovered": 1, 00:10:56.661 "num_base_bdevs_operational": 1, 00:10:56.661 "base_bdevs_list": [ 00:10:56.661 { 00:10:56.661 "name": null, 00:10:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.661 "is_configured": false, 00:10:56.661 "data_offset": 0, 00:10:56.661 "data_size": 63488 00:10:56.661 }, 00:10:56.661 { 00:10:56.661 "name": "BaseBdev2", 00:10:56.661 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:56.661 "is_configured": true, 00:10:56.661 "data_offset": 2048, 00:10:56.661 "data_size": 63488 00:10:56.661 } 00:10:56.661 ] 00:10:56.661 }' 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.661 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.941 11:53:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.941 "name": "raid_bdev1", 00:10:56.941 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:56.941 "strip_size_kb": 0, 00:10:56.941 "state": "online", 00:10:56.941 "raid_level": "raid1", 00:10:56.941 "superblock": true, 00:10:56.941 "num_base_bdevs": 2, 00:10:56.941 "num_base_bdevs_discovered": 1, 00:10:56.941 "num_base_bdevs_operational": 1, 00:10:56.941 "base_bdevs_list": [ 00:10:56.941 { 00:10:56.941 "name": null, 00:10:56.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.941 "is_configured": false, 00:10:56.941 "data_offset": 0, 00:10:56.941 "data_size": 63488 00:10:56.941 }, 00:10:56.941 { 00:10:56.941 "name": "BaseBdev2", 00:10:56.941 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:56.941 "is_configured": true, 00:10:56.941 "data_offset": 2048, 00:10:56.941 "data_size": 
63488 00:10:56.941 } 00:10:56.941 ] 00:10:56.941 }' 00:10:56.941 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.207 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.207 [2024-12-06 11:53:54.798621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:57.207 [2024-12-06 11:53:54.798676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.207 [2024-12-06 11:53:54.798713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:57.207 [2024-12-06 11:53:54.798723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.207 [2024-12-06 11:53:54.799120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.207 [2024-12-06 11:53:54.799143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:10:57.207 [2024-12-06 11:53:54.799210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:57.207 [2024-12-06 11:53:54.799231] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:57.207 [2024-12-06 11:53:54.799248] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:57.207 [2024-12-06 11:53:54.799268] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:57.208 BaseBdev1 00:10:57.208 11:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.208 11:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.148 "name": "raid_bdev1", 00:10:58.148 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:58.148 "strip_size_kb": 0, 00:10:58.148 "state": "online", 00:10:58.148 "raid_level": "raid1", 00:10:58.148 "superblock": true, 00:10:58.148 "num_base_bdevs": 2, 00:10:58.148 "num_base_bdevs_discovered": 1, 00:10:58.148 "num_base_bdevs_operational": 1, 00:10:58.148 "base_bdevs_list": [ 00:10:58.148 { 00:10:58.148 "name": null, 00:10:58.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.148 "is_configured": false, 00:10:58.148 "data_offset": 0, 00:10:58.148 "data_size": 63488 00:10:58.148 }, 00:10:58.148 { 00:10:58.148 "name": "BaseBdev2", 00:10:58.148 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:58.148 "is_configured": true, 00:10:58.148 "data_offset": 2048, 00:10:58.148 "data_size": 63488 00:10:58.148 } 00:10:58.148 ] 00:10:58.148 }' 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.148 11:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.716 "name": "raid_bdev1", 00:10:58.716 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:10:58.716 "strip_size_kb": 0, 00:10:58.716 "state": "online", 00:10:58.716 "raid_level": "raid1", 00:10:58.716 "superblock": true, 00:10:58.716 "num_base_bdevs": 2, 00:10:58.716 "num_base_bdevs_discovered": 1, 00:10:58.716 "num_base_bdevs_operational": 1, 00:10:58.716 "base_bdevs_list": [ 00:10:58.716 { 00:10:58.716 "name": null, 00:10:58.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.716 "is_configured": false, 00:10:58.716 "data_offset": 0, 00:10:58.716 "data_size": 63488 00:10:58.716 }, 00:10:58.716 { 00:10:58.716 "name": "BaseBdev2", 00:10:58.716 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:10:58.716 "is_configured": true, 00:10:58.716 "data_offset": 2048, 00:10:58.716 "data_size": 63488 00:10:58.716 } 00:10:58.716 ] 00:10:58.716 }' 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:58.716 11:53:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.716 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.716 [2024-12-06 11:53:56.423857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.716 [2024-12-06 11:53:56.424023] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:58.716 [2024-12-06 11:53:56.424037] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:58.716 request: 00:10:58.717 { 00:10:58.717 "base_bdev": "BaseBdev1", 00:10:58.717 "raid_bdev": "raid_bdev1", 00:10:58.717 "method": 
"bdev_raid_add_base_bdev", 00:10:58.717 "req_id": 1 00:10:58.717 } 00:10:58.717 Got JSON-RPC error response 00:10:58.717 response: 00:10:58.717 { 00:10:58.717 "code": -22, 00:10:58.717 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:58.717 } 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:58.717 11:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.095 11:53:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.095 "name": "raid_bdev1", 00:11:00.095 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:11:00.095 "strip_size_kb": 0, 00:11:00.095 "state": "online", 00:11:00.095 "raid_level": "raid1", 00:11:00.095 "superblock": true, 00:11:00.095 "num_base_bdevs": 2, 00:11:00.095 "num_base_bdevs_discovered": 1, 00:11:00.095 "num_base_bdevs_operational": 1, 00:11:00.095 "base_bdevs_list": [ 00:11:00.095 { 00:11:00.095 "name": null, 00:11:00.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.095 "is_configured": false, 00:11:00.095 "data_offset": 0, 00:11:00.095 "data_size": 63488 00:11:00.095 }, 00:11:00.095 { 00:11:00.095 "name": "BaseBdev2", 00:11:00.095 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:11:00.095 "is_configured": true, 00:11:00.095 "data_offset": 2048, 00:11:00.095 "data_size": 63488 00:11:00.095 } 00:11:00.095 ] 00:11:00.095 }' 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.095 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.355 "name": "raid_bdev1", 00:11:00.355 "uuid": "e4b95267-3d31-4399-b917-591529b01380", 00:11:00.355 "strip_size_kb": 0, 00:11:00.355 "state": "online", 00:11:00.355 "raid_level": "raid1", 00:11:00.355 "superblock": true, 00:11:00.355 "num_base_bdevs": 2, 00:11:00.355 "num_base_bdevs_discovered": 1, 00:11:00.355 "num_base_bdevs_operational": 1, 00:11:00.355 "base_bdevs_list": [ 00:11:00.355 { 00:11:00.355 "name": null, 00:11:00.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.355 "is_configured": false, 00:11:00.355 "data_offset": 0, 00:11:00.355 "data_size": 63488 00:11:00.355 }, 00:11:00.355 { 00:11:00.355 "name": "BaseBdev2", 00:11:00.355 "uuid": "6d8ef632-23a4-5f86-9b52-417baa7ab7be", 00:11:00.355 "is_configured": true, 00:11:00.355 "data_offset": 2048, 00:11:00.355 "data_size": 63488 00:11:00.355 } 00:11:00.355 ] 00:11:00.355 }' 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:00.355 11:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86065 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86065 ']' 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86065 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86065 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86065' 00:11:00.355 killing process with pid 86065 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86065 00:11:00.355 Received shutdown signal, test time was about 60.000000 seconds 00:11:00.355 00:11:00.355 Latency(us) 00:11:00.355 [2024-12-06T11:53:58.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.355 [2024-12-06T11:53:58.113Z] =================================================================================================================== 00:11:00.355 [2024-12-06T11:53:58.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:00.355 [2024-12-06 11:53:58.041048] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.355 [2024-12-06 
11:53:58.041175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.355 [2024-12-06 11:53:58.041228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.355 [2024-12-06 11:53:58.041249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:00.355 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86065 00:11:00.355 [2024-12-06 11:53:58.072124] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.615 11:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:00.615 00:11:00.615 real 0m20.969s 00:11:00.615 user 0m26.289s 00:11:00.615 sys 0m3.332s 00:11:00.615 ************************************ 00:11:00.615 END TEST raid_rebuild_test_sb 00:11:00.615 ************************************ 00:11:00.615 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.615 11:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.615 11:53:58 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:00.615 11:53:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:00.615 11:53:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.615 11:53:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 ************************************ 00:11:00.874 START TEST raid_rebuild_test_io 00:11:00.874 ************************************ 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:00.874 
11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86766 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86766 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 86766 ']' 00:11:00.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.874 11:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 [2024-12-06 11:53:58.462826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:00.874 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:00.874 Zero copy mechanism will not be used. 
00:11:00.874 [2024-12-06 11:53:58.463020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86766 ] 00:11:00.874 [2024-12-06 11:53:58.606545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.134 [2024-12-06 11:53:58.649907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.134 [2024-12-06 11:53:58.691012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.134 [2024-12-06 11:53:58.691121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 BaseBdev1_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 [2024-12-06 11:53:59.300416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:01.703 [2024-12-06 11:53:59.300521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.703 [2024-12-06 11:53:59.300571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:01.703 [2024-12-06 11:53:59.300585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.703 [2024-12-06 11:53:59.302631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.703 [2024-12-06 11:53:59.302666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.703 BaseBdev1 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 BaseBdev2_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 [2024-12-06 11:53:59.347322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:01.703 [2024-12-06 11:53:59.347426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.703 [2024-12-06 11:53:59.347475] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.703 [2024-12-06 11:53:59.347499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.703 [2024-12-06 11:53:59.351883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.703 [2024-12-06 11:53:59.351939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.703 BaseBdev2 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 spare_malloc 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 spare_delay 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 [2024-12-06 11:53:59.389456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:01.703 [2024-12-06 11:53:59.389504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.703 [2024-12-06 11:53:59.389539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.703 [2024-12-06 11:53:59.389548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.703 [2024-12-06 11:53:59.391562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.703 [2024-12-06 11:53:59.391646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:01.703 spare 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 [2024-12-06 11:53:59.401479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.703 [2024-12-06 11:53:59.403335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.703 [2024-12-06 11:53:59.403426] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:01.703 [2024-12-06 11:53:59.403438] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:01.703 [2024-12-06 11:53:59.403675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:01.703 [2024-12-06 11:53:59.403801] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:01.703 [2024-12-06 11:53:59.403813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001200 00:11:01.703 [2024-12-06 11:53:59.403943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.703 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.704 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.704 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.704 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.704 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.704 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.962 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.962 
"name": "raid_bdev1", 00:11:01.962 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:01.962 "strip_size_kb": 0, 00:11:01.962 "state": "online", 00:11:01.962 "raid_level": "raid1", 00:11:01.962 "superblock": false, 00:11:01.962 "num_base_bdevs": 2, 00:11:01.962 "num_base_bdevs_discovered": 2, 00:11:01.962 "num_base_bdevs_operational": 2, 00:11:01.962 "base_bdevs_list": [ 00:11:01.962 { 00:11:01.962 "name": "BaseBdev1", 00:11:01.962 "uuid": "a2bccd34-2794-5e0d-815f-066487855981", 00:11:01.962 "is_configured": true, 00:11:01.962 "data_offset": 0, 00:11:01.962 "data_size": 65536 00:11:01.962 }, 00:11:01.962 { 00:11:01.962 "name": "BaseBdev2", 00:11:01.962 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:01.962 "is_configured": true, 00:11:01.962 "data_offset": 0, 00:11:01.962 "data_size": 65536 00:11:01.962 } 00:11:01.962 ] 00:11:01.962 }' 00:11:01.962 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.962 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 [2024-12-06 11:53:59.836962] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 [2024-12-06 11:53:59.904616] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:02.222 11:53:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.222 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.222 "name": "raid_bdev1", 00:11:02.222 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:02.222 "strip_size_kb": 0, 00:11:02.222 "state": "online", 00:11:02.222 "raid_level": "raid1", 00:11:02.222 "superblock": false, 00:11:02.222 "num_base_bdevs": 2, 00:11:02.222 "num_base_bdevs_discovered": 1, 00:11:02.222 "num_base_bdevs_operational": 1, 00:11:02.222 "base_bdevs_list": [ 00:11:02.222 { 00:11:02.222 "name": null, 00:11:02.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.223 "is_configured": false, 00:11:02.223 "data_offset": 0, 00:11:02.223 "data_size": 65536 00:11:02.223 }, 00:11:02.223 { 00:11:02.223 "name": "BaseBdev2", 00:11:02.223 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:02.223 "is_configured": true, 00:11:02.223 "data_offset": 0, 00:11:02.223 "data_size": 65536 00:11:02.223 } 00:11:02.223 ] 00:11:02.223 }' 00:11:02.223 11:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:02.223 11:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.223 [2024-12-06 11:53:59.970476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:02.223 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:02.223 Zero copy mechanism will not be used. 00:11:02.223 Running I/O for 60 seconds... 00:11:02.792 11:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:02.792 11:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.792 11:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.792 [2024-12-06 11:54:00.331831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:02.792 11:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.792 11:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:02.792 [2024-12-06 11:54:00.361962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:02.792 [2024-12-06 11:54:00.363930] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:02.792 [2024-12-06 11:54:00.474800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:02.792 [2024-12-06 11:54:00.475163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:03.051 [2024-12-06 11:54:00.687117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:03.051 [2024-12-06 11:54:00.687396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:03.310 123.00 IOPS, 369.00 MiB/s 
[2024-12-06T11:54:01.068Z] [2024-12-06 11:54:01.000527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:03.310 [2024-12-06 11:54:01.000929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:03.570 [2024-12-06 11:54:01.123596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:03.570 [2024-12-06 11:54:01.123875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.831 "name": "raid_bdev1", 00:11:03.831 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:03.831 
"strip_size_kb": 0, 00:11:03.831 "state": "online", 00:11:03.831 "raid_level": "raid1", 00:11:03.831 "superblock": false, 00:11:03.831 "num_base_bdevs": 2, 00:11:03.831 "num_base_bdevs_discovered": 2, 00:11:03.831 "num_base_bdevs_operational": 2, 00:11:03.831 "process": { 00:11:03.831 "type": "rebuild", 00:11:03.831 "target": "spare", 00:11:03.831 "progress": { 00:11:03.831 "blocks": 12288, 00:11:03.831 "percent": 18 00:11:03.831 } 00:11:03.831 }, 00:11:03.831 "base_bdevs_list": [ 00:11:03.831 { 00:11:03.831 "name": "spare", 00:11:03.831 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:03.831 "is_configured": true, 00:11:03.831 "data_offset": 0, 00:11:03.831 "data_size": 65536 00:11:03.831 }, 00:11:03.831 { 00:11:03.831 "name": "BaseBdev2", 00:11:03.831 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:03.831 "is_configured": true, 00:11:03.831 "data_offset": 0, 00:11:03.831 "data_size": 65536 00:11:03.831 } 00:11:03.831 ] 00:11:03.831 }' 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.831 [2024-12-06 11:54:01.445705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.831 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.831 [2024-12-06 11:54:01.488865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:11:03.831 [2024-12-06 11:54:01.559187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:04.090 [2024-12-06 11:54:01.663989] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:04.090 [2024-12-06 11:54:01.675859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.090 [2024-12-06 11:54:01.675898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:04.090 [2024-12-06 11:54:01.675908] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:04.090 [2024-12-06 11:54:01.691543] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.091 11:54:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.091 "name": "raid_bdev1", 00:11:04.091 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:04.091 "strip_size_kb": 0, 00:11:04.091 "state": "online", 00:11:04.091 "raid_level": "raid1", 00:11:04.091 "superblock": false, 00:11:04.091 "num_base_bdevs": 2, 00:11:04.091 "num_base_bdevs_discovered": 1, 00:11:04.091 "num_base_bdevs_operational": 1, 00:11:04.091 "base_bdevs_list": [ 00:11:04.091 { 00:11:04.091 "name": null, 00:11:04.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.091 "is_configured": false, 00:11:04.091 "data_offset": 0, 00:11:04.091 "data_size": 65536 00:11:04.091 }, 00:11:04.091 { 00:11:04.091 "name": "BaseBdev2", 00:11:04.091 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:04.091 "is_configured": true, 00:11:04.091 "data_offset": 0, 00:11:04.091 "data_size": 65536 00:11:04.091 } 00:11:04.091 ] 00:11:04.091 }' 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.091 11:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.609 140.50 IOPS, 421.50 MiB/s [2024-12-06T11:54:02.367Z] 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:04.609 11:54:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.610 "name": "raid_bdev1", 00:11:04.610 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:04.610 "strip_size_kb": 0, 00:11:04.610 "state": "online", 00:11:04.610 "raid_level": "raid1", 00:11:04.610 "superblock": false, 00:11:04.610 "num_base_bdevs": 2, 00:11:04.610 "num_base_bdevs_discovered": 1, 00:11:04.610 "num_base_bdevs_operational": 1, 00:11:04.610 "base_bdevs_list": [ 00:11:04.610 { 00:11:04.610 "name": null, 00:11:04.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.610 "is_configured": false, 00:11:04.610 "data_offset": 0, 00:11:04.610 "data_size": 65536 00:11:04.610 }, 00:11:04.610 { 00:11:04.610 "name": "BaseBdev2", 00:11:04.610 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:04.610 "is_configured": true, 00:11:04.610 "data_offset": 0, 00:11:04.610 "data_size": 65536 00:11:04.610 } 00:11:04.610 ] 00:11:04.610 }' 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.610 [2024-12-06 11:54:02.249599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.610 11:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:04.610 [2024-12-06 11:54:02.269954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:04.610 [2024-12-06 11:54:02.271849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:04.867 [2024-12-06 11:54:02.379077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:04.867 [2024-12-06 11:54:02.379414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:04.867 [2024-12-06 11:54:02.498332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:04.867 [2024-12-06 11:54:02.498574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:05.124 [2024-12-06 11:54:02.719020] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:05.383 159.67 IOPS, 479.00 MiB/s [2024-12-06T11:54:03.141Z] [2024-12-06 11:54:03.085638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:05.383 [2024-12-06 11:54:03.086155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:05.642 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.642 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.642 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.642 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.643 "name": "raid_bdev1", 00:11:05.643 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:05.643 "strip_size_kb": 0, 00:11:05.643 "state": "online", 00:11:05.643 "raid_level": "raid1", 00:11:05.643 "superblock": false, 00:11:05.643 "num_base_bdevs": 2, 00:11:05.643 "num_base_bdevs_discovered": 2, 00:11:05.643 
"num_base_bdevs_operational": 2, 00:11:05.643 "process": { 00:11:05.643 "type": "rebuild", 00:11:05.643 "target": "spare", 00:11:05.643 "progress": { 00:11:05.643 "blocks": 14336, 00:11:05.643 "percent": 21 00:11:05.643 } 00:11:05.643 }, 00:11:05.643 "base_bdevs_list": [ 00:11:05.643 { 00:11:05.643 "name": "spare", 00:11:05.643 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:05.643 "is_configured": true, 00:11:05.643 "data_offset": 0, 00:11:05.643 "data_size": 65536 00:11:05.643 }, 00:11:05.643 { 00:11:05.643 "name": "BaseBdev2", 00:11:05.643 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:05.643 "is_configured": true, 00:11:05.643 "data_offset": 0, 00:11:05.643 "data_size": 65536 00:11:05.643 } 00:11:05.643 ] 00:11:05.643 }' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=317 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.643 11:54:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.643 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.902 "name": "raid_bdev1", 00:11:05.902 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:05.902 "strip_size_kb": 0, 00:11:05.902 "state": "online", 00:11:05.902 "raid_level": "raid1", 00:11:05.902 "superblock": false, 00:11:05.902 "num_base_bdevs": 2, 00:11:05.902 "num_base_bdevs_discovered": 2, 00:11:05.902 "num_base_bdevs_operational": 2, 00:11:05.902 "process": { 00:11:05.902 "type": "rebuild", 00:11:05.902 "target": "spare", 00:11:05.902 "progress": { 00:11:05.902 "blocks": 18432, 00:11:05.902 "percent": 28 00:11:05.902 } 00:11:05.902 }, 00:11:05.902 "base_bdevs_list": [ 00:11:05.902 { 00:11:05.902 "name": "spare", 00:11:05.902 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:05.902 "is_configured": true, 00:11:05.902 "data_offset": 0, 00:11:05.902 "data_size": 65536 00:11:05.902 }, 00:11:05.902 { 00:11:05.902 "name": "BaseBdev2", 00:11:05.902 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:05.902 
"is_configured": true, 00:11:05.902 "data_offset": 0, 00:11:05.902 "data_size": 65536 00:11:05.902 } 00:11:05.902 ] 00:11:05.902 }' 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.902 11:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:05.902 [2024-12-06 11:54:03.589921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:05.902 [2024-12-06 11:54:03.590205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:06.162 [2024-12-06 11:54:03.904784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:06.162 [2024-12-06 11:54:03.905231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:06.421 135.25 IOPS, 405.75 MiB/s [2024-12-06T11:54:04.179Z] [2024-12-06 11:54:04.017945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:06.989 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:06.989 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.989 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.989 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:11:06.989 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.990 [2024-12-06 11:54:04.556647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.990 "name": "raid_bdev1", 00:11:06.990 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:06.990 "strip_size_kb": 0, 00:11:06.990 "state": "online", 00:11:06.990 "raid_level": "raid1", 00:11:06.990 "superblock": false, 00:11:06.990 "num_base_bdevs": 2, 00:11:06.990 "num_base_bdevs_discovered": 2, 00:11:06.990 "num_base_bdevs_operational": 2, 00:11:06.990 "process": { 00:11:06.990 "type": "rebuild", 00:11:06.990 "target": "spare", 00:11:06.990 "progress": { 00:11:06.990 "blocks": 36864, 00:11:06.990 "percent": 56 00:11:06.990 } 00:11:06.990 }, 00:11:06.990 "base_bdevs_list": [ 00:11:06.990 { 00:11:06.990 "name": "spare", 00:11:06.990 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:06.990 "is_configured": true, 00:11:06.990 "data_offset": 0, 00:11:06.990 "data_size": 65536 00:11:06.990 }, 00:11:06.990 { 00:11:06.990 "name": "BaseBdev2", 00:11:06.990 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:06.990 "is_configured": 
true, 00:11:06.990 "data_offset": 0, 00:11:06.990 "data_size": 65536 00:11:06.990 } 00:11:06.990 ] 00:11:06.990 }' 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.990 11:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:07.249 [2024-12-06 11:54:04.759640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:07.249 [2024-12-06 11:54:04.759825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:07.828 116.40 IOPS, 349.20 MiB/s [2024-12-06T11:54:05.586Z] [2024-12-06 11:54:05.442099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:08.086 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:08.086 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.086 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.087 "name": "raid_bdev1", 00:11:08.087 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:08.087 "strip_size_kb": 0, 00:11:08.087 "state": "online", 00:11:08.087 "raid_level": "raid1", 00:11:08.087 "superblock": false, 00:11:08.087 "num_base_bdevs": 2, 00:11:08.087 "num_base_bdevs_discovered": 2, 00:11:08.087 "num_base_bdevs_operational": 2, 00:11:08.087 "process": { 00:11:08.087 "type": "rebuild", 00:11:08.087 "target": "spare", 00:11:08.087 "progress": { 00:11:08.087 "blocks": 55296, 00:11:08.087 "percent": 84 00:11:08.087 } 00:11:08.087 }, 00:11:08.087 "base_bdevs_list": [ 00:11:08.087 { 00:11:08.087 "name": "spare", 00:11:08.087 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:08.087 "is_configured": true, 00:11:08.087 "data_offset": 0, 00:11:08.087 "data_size": 65536 00:11:08.087 }, 00:11:08.087 { 00:11:08.087 "name": "BaseBdev2", 00:11:08.087 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:08.087 "is_configured": true, 00:11:08.087 "data_offset": 0, 00:11:08.087 "data_size": 65536 00:11:08.087 } 00:11:08.087 ] 00:11:08.087 }' 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.087 11:54:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:08.087 11:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:08.605 103.33 IOPS, 310.00 MiB/s [2024-12-06T11:54:06.363Z] [2024-12-06 11:54:06.217224] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:08.605 [2024-12-06 11:54:06.317024] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:08.605 [2024-12-06 11:54:06.318554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.174 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:09.174 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.174 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.174 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.174 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.175 "name": "raid_bdev1", 00:11:09.175 "uuid": 
"07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:09.175 "strip_size_kb": 0, 00:11:09.175 "state": "online", 00:11:09.175 "raid_level": "raid1", 00:11:09.175 "superblock": false, 00:11:09.175 "num_base_bdevs": 2, 00:11:09.175 "num_base_bdevs_discovered": 2, 00:11:09.175 "num_base_bdevs_operational": 2, 00:11:09.175 "base_bdevs_list": [ 00:11:09.175 { 00:11:09.175 "name": "spare", 00:11:09.175 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:09.175 "is_configured": true, 00:11:09.175 "data_offset": 0, 00:11:09.175 "data_size": 65536 00:11:09.175 }, 00:11:09.175 { 00:11:09.175 "name": "BaseBdev2", 00:11:09.175 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:09.175 "is_configured": true, 00:11:09.175 "data_offset": 0, 00:11:09.175 "data_size": 65536 00:11:09.175 } 00:11:09.175 ] 00:11:09.175 }' 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:09.175 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.434 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.434 "name": "raid_bdev1", 00:11:09.434 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:09.434 "strip_size_kb": 0, 00:11:09.434 "state": "online", 00:11:09.434 "raid_level": "raid1", 00:11:09.434 "superblock": false, 00:11:09.434 "num_base_bdevs": 2, 00:11:09.434 "num_base_bdevs_discovered": 2, 00:11:09.434 "num_base_bdevs_operational": 2, 00:11:09.434 "base_bdevs_list": [ 00:11:09.435 { 00:11:09.435 "name": "spare", 00:11:09.435 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:09.435 "is_configured": true, 00:11:09.435 "data_offset": 0, 00:11:09.435 "data_size": 65536 00:11:09.435 }, 00:11:09.435 { 00:11:09.435 "name": "BaseBdev2", 00:11:09.435 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:09.435 "is_configured": true, 00:11:09.435 "data_offset": 0, 00:11:09.435 "data_size": 65536 00:11:09.435 } 00:11:09.435 ] 00:11:09.435 }' 00:11:09.435 11:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.435 93.71 IOPS, 281.14 MiB/s [2024-12-06T11:54:07.193Z] 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.435 "name": "raid_bdev1", 00:11:09.435 "uuid": "07d225a4-ee51-4abb-b7a6-143d3d0fe238", 00:11:09.435 "strip_size_kb": 0, 00:11:09.435 "state": "online", 00:11:09.435 "raid_level": "raid1", 00:11:09.435 "superblock": false, 00:11:09.435 "num_base_bdevs": 2, 00:11:09.435 "num_base_bdevs_discovered": 2, 00:11:09.435 
"num_base_bdevs_operational": 2, 00:11:09.435 "base_bdevs_list": [ 00:11:09.435 { 00:11:09.435 "name": "spare", 00:11:09.435 "uuid": "63539edd-c4d3-564d-988e-22d510ceec50", 00:11:09.435 "is_configured": true, 00:11:09.435 "data_offset": 0, 00:11:09.435 "data_size": 65536 00:11:09.435 }, 00:11:09.435 { 00:11:09.435 "name": "BaseBdev2", 00:11:09.435 "uuid": "0d28a9dd-b455-54ed-9982-d19a8d5ac965", 00:11:09.435 "is_configured": true, 00:11:09.435 "data_offset": 0, 00:11:09.435 "data_size": 65536 00:11:09.435 } 00:11:09.435 ] 00:11:09.435 }' 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.435 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.694 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.694 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.694 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.694 [2024-12-06 11:54:07.428035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.694 [2024-12-06 11:54:07.428121] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.001 00:11:10.001 Latency(us) 00:11:10.001 [2024-12-06T11:54:07.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.001 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:10.001 raid_bdev1 : 7.53 89.60 268.80 0.00 0.00 14864.98 273.66 112641.79 00:11:10.001 [2024-12-06T11:54:07.759Z] =================================================================================================================== 00:11:10.001 [2024-12-06T11:54:07.759Z] Total : 89.60 268.80 0.00 0.00 14864.98 273.66 112641.79 00:11:10.001 [2024-12-06 11:54:07.494964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:10.001 [2024-12-06 11:54:07.495035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.001 [2024-12-06 11:54:07.495135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.001 [2024-12-06 11:54:07.495180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:10.001 { 00:11:10.001 "results": [ 00:11:10.001 { 00:11:10.001 "job": "raid_bdev1", 00:11:10.001 "core_mask": "0x1", 00:11:10.001 "workload": "randrw", 00:11:10.001 "percentage": 50, 00:11:10.001 "status": "finished", 00:11:10.001 "queue_depth": 2, 00:11:10.001 "io_size": 3145728, 00:11:10.001 "runtime": 7.53352, 00:11:10.001 "iops": 89.59954974567002, 00:11:10.001 "mibps": 268.79864923701007, 00:11:10.001 "io_failed": 0, 00:11:10.001 "io_timeout": 0, 00:11:10.001 "avg_latency_us": 14864.976903121462, 00:11:10.001 "min_latency_us": 273.6628820960699, 00:11:10.001 "max_latency_us": 112641.78864628822 00:11:10.001 } 00:11:10.001 ], 00:11:10.001 "core_count": 1 00:11:10.001 } 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:10.002 /dev/nbd0 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # break 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.002 1+0 records in 00:11:10.002 1+0 records out 00:11:10.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543218 s, 7.5 MB/s 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:10.002 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:10.262 /dev/nbd1 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:10.262 11:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.262 1+0 records in 
00:11:10.262 1+0 records out 00:11:10.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230925 s, 17.7 MB/s 00:11:10.262 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.522 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.781 11:54:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86766 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 86766 ']' 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 86766 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86766 00:11:10.781 killing process with pid 86766 00:11:10.781 Received shutdown signal, test time was about 8.565024 seconds 00:11:10.781 00:11:10.781 Latency(us) 00:11:10.781 [2024-12-06T11:54:08.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.781 [2024-12-06T11:54:08.539Z] =================================================================================================================== 00:11:10.781 [2024-12-06T11:54:08.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86766' 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 86766 
00:11:10.781 [2024-12-06 11:54:08.521363] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.781 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 86766 00:11:11.046 [2024-12-06 11:54:08.546910] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.046 11:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:11.046 00:11:11.046 real 0m10.398s 00:11:11.046 user 0m13.231s 00:11:11.046 sys 0m1.307s 00:11:11.046 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.046 11:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.046 ************************************ 00:11:11.046 END TEST raid_rebuild_test_io 00:11:11.046 ************************************ 00:11:11.321 11:54:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:11.321 11:54:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:11.321 11:54:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.321 11:54:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.321 ************************************ 00:11:11.321 START TEST raid_rebuild_test_sb_io 00:11:11.321 ************************************ 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:11.321 11:54:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:11.321 
11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87133 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87133 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87133 ']' 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.321 11:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.321 [2024-12-06 11:54:08.932283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:11.321 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:11.321 Zero copy mechanism will not be used. 
00:11:11.322 [2024-12-06 11:54:08.932488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87133 ] 00:11:11.596 [2024-12-06 11:54:09.076841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.596 [2024-12-06 11:54:09.120408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.596 [2024-12-06 11:54:09.162122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.596 [2024-12-06 11:54:09.162156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 BaseBdev1_malloc 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 [2024-12-06 11:54:09.739502] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:12.164 [2024-12-06 11:54:09.739566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.164 [2024-12-06 11:54:09.739588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:12.164 [2024-12-06 11:54:09.739602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.164 [2024-12-06 11:54:09.741637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.164 [2024-12-06 11:54:09.741676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.164 BaseBdev1 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 BaseBdev2_malloc 00:11:12.164 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 [2024-12-06 11:54:09.778135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:12.165 [2024-12-06 11:54:09.778297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:12.165 [2024-12-06 11:54:09.778339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.165 [2024-12-06 11:54:09.778360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.165 [2024-12-06 11:54:09.782883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.165 [2024-12-06 11:54:09.782952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.165 BaseBdev2 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 spare_malloc 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 spare_delay 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 
[2024-12-06 11:54:09.820715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:12.165 [2024-12-06 11:54:09.820767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.165 [2024-12-06 11:54:09.820806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.165 [2024-12-06 11:54:09.820814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.165 [2024-12-06 11:54:09.822829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.165 [2024-12-06 11:54:09.822908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:12.165 spare 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 [2024-12-06 11:54:09.832754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.165 [2024-12-06 11:54:09.834558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.165 [2024-12-06 11:54:09.834708] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:12.165 [2024-12-06 11:54:09.834720] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.165 [2024-12-06 11:54:09.834963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:12.165 [2024-12-06 11:54:09.835107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:12.165 [2024-12-06 
11:54:09.835118] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:12.165 [2024-12-06 11:54:09.835228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.165 "name": "raid_bdev1", 00:11:12.165 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:12.165 "strip_size_kb": 0, 00:11:12.165 "state": "online", 00:11:12.165 "raid_level": "raid1", 00:11:12.165 "superblock": true, 00:11:12.165 "num_base_bdevs": 2, 00:11:12.165 "num_base_bdevs_discovered": 2, 00:11:12.165 "num_base_bdevs_operational": 2, 00:11:12.165 "base_bdevs_list": [ 00:11:12.165 { 00:11:12.165 "name": "BaseBdev1", 00:11:12.165 "uuid": "bd675bb6-e596-55fe-a650-a6b2b7afe07a", 00:11:12.165 "is_configured": true, 00:11:12.165 "data_offset": 2048, 00:11:12.165 "data_size": 63488 00:11:12.165 }, 00:11:12.165 { 00:11:12.165 "name": "BaseBdev2", 00:11:12.165 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:12.165 "is_configured": true, 00:11:12.165 "data_offset": 2048, 00:11:12.165 "data_size": 63488 00:11:12.165 } 00:11:12.165 ] 00:11:12.165 }' 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.165 11:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:12.734 [2024-12-06 11:54:10.268248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.734 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 [2024-12-06 11:54:10.363821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.735 "name": "raid_bdev1", 00:11:12.735 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:12.735 "strip_size_kb": 0, 00:11:12.735 "state": "online", 00:11:12.735 "raid_level": "raid1", 00:11:12.735 "superblock": true, 00:11:12.735 "num_base_bdevs": 2, 00:11:12.735 "num_base_bdevs_discovered": 1, 00:11:12.735 "num_base_bdevs_operational": 1, 00:11:12.735 "base_bdevs_list": [ 00:11:12.735 { 00:11:12.735 "name": null, 00:11:12.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.735 "is_configured": false, 00:11:12.735 "data_offset": 0, 00:11:12.735 "data_size": 63488 00:11:12.735 }, 00:11:12.735 { 00:11:12.735 "name": "BaseBdev2", 00:11:12.735 "uuid": 
"6256e441-b056-583c-9f03-f5a56519c904", 00:11:12.735 "is_configured": true, 00:11:12.735 "data_offset": 2048, 00:11:12.735 "data_size": 63488 00:11:12.735 } 00:11:12.735 ] 00:11:12.735 }' 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.735 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 [2024-12-06 11:54:10.449672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:12.735 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:12.735 Zero copy mechanism will not be used. 00:11:12.735 Running I/O for 60 seconds... 00:11:12.995 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:12.995 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.995 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.995 [2024-12-06 11:54:10.723800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:13.254 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.254 11:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:13.254 [2024-12-06 11:54:10.770217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:13.254 [2024-12-06 11:54:10.772176] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:13.254 [2024-12-06 11:54:10.884482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.254 [2024-12-06 11:54:10.885018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.514 [2024-12-06 11:54:11.098458] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.514 [2024-12-06 11:54:11.098821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.773 [2024-12-06 11:54:11.436299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:14.032 182.00 IOPS, 546.00 MiB/s [2024-12-06T11:54:11.790Z] [2024-12-06 11:54:11.643436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:14.032 [2024-12-06 11:54:11.643723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.032 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.292 11:54:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.292 "name": "raid_bdev1", 00:11:14.292 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:14.292 "strip_size_kb": 0, 00:11:14.292 "state": "online", 00:11:14.292 "raid_level": "raid1", 00:11:14.292 "superblock": true, 00:11:14.292 "num_base_bdevs": 2, 00:11:14.292 "num_base_bdevs_discovered": 2, 00:11:14.292 "num_base_bdevs_operational": 2, 00:11:14.292 "process": { 00:11:14.292 "type": "rebuild", 00:11:14.292 "target": "spare", 00:11:14.292 "progress": { 00:11:14.292 "blocks": 12288, 00:11:14.292 "percent": 19 00:11:14.292 } 00:11:14.292 }, 00:11:14.292 "base_bdevs_list": [ 00:11:14.292 { 00:11:14.292 "name": "spare", 00:11:14.292 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:14.292 "is_configured": true, 00:11:14.292 "data_offset": 2048, 00:11:14.292 "data_size": 63488 00:11:14.292 }, 00:11:14.292 { 00:11:14.292 "name": "BaseBdev2", 00:11:14.292 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:14.292 "is_configured": true, 00:11:14.292 "data_offset": 2048, 00:11:14.292 "data_size": 63488 00:11:14.292 } 00:11:14.292 ] 00:11:14.292 }' 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.292 [2024-12-06 11:54:11.876140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.292 11:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.292 [2024-12-06 11:54:11.916209] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.292 [2024-12-06 11:54:11.989413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:14.553 [2024-12-06 11:54:12.106996] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:14.553 [2024-12-06 11:54:12.114475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.553 [2024-12-06 11:54:12.114507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.553 [2024-12-06 11:54:12.114532] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:14.553 [2024-12-06 11:54:12.131412] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.553 "name": "raid_bdev1", 00:11:14.553 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:14.553 "strip_size_kb": 0, 00:11:14.553 "state": "online", 00:11:14.553 "raid_level": "raid1", 00:11:14.553 "superblock": true, 00:11:14.553 "num_base_bdevs": 2, 00:11:14.553 "num_base_bdevs_discovered": 1, 00:11:14.553 "num_base_bdevs_operational": 1, 00:11:14.553 "base_bdevs_list": [ 00:11:14.553 { 00:11:14.553 "name": null, 00:11:14.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.553 "is_configured": false, 00:11:14.553 "data_offset": 0, 00:11:14.553 "data_size": 63488 00:11:14.553 }, 00:11:14.553 { 00:11:14.553 "name": "BaseBdev2", 00:11:14.553 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:14.553 "is_configured": true, 00:11:14.553 "data_offset": 2048, 00:11:14.553 "data_size": 63488 00:11:14.553 } 00:11:14.553 ] 00:11:14.553 }' 00:11:14.553 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.553 11:54:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.075 157.50 IOPS, 472.50 MiB/s [2024-12-06T11:54:12.833Z] 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.075 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.075 "name": "raid_bdev1", 00:11:15.075 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:15.075 "strip_size_kb": 0, 00:11:15.075 "state": "online", 00:11:15.075 "raid_level": "raid1", 00:11:15.075 "superblock": true, 00:11:15.075 "num_base_bdevs": 2, 00:11:15.075 "num_base_bdevs_discovered": 1, 00:11:15.075 "num_base_bdevs_operational": 1, 00:11:15.075 "base_bdevs_list": [ 00:11:15.075 { 00:11:15.075 "name": null, 00:11:15.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.075 "is_configured": false, 00:11:15.075 "data_offset": 0, 00:11:15.075 "data_size": 63488 00:11:15.075 }, 00:11:15.075 { 
00:11:15.075 "name": "BaseBdev2", 00:11:15.075 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:15.075 "is_configured": true, 00:11:15.076 "data_offset": 2048, 00:11:15.076 "data_size": 63488 00:11:15.076 } 00:11:15.076 ] 00:11:15.076 }' 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.076 [2024-12-06 11:54:12.711589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.076 11:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:15.076 [2024-12-06 11:54:12.752476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:15.076 [2024-12-06 11:54:12.754387] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.336 [2024-12-06 11:54:12.865512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.336 [2024-12-06 11:54:12.865851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.336 [2024-12-06 11:54:13.072838] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.336 [2024-12-06 11:54:13.073075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.597 [2024-12-06 11:54:13.311115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:15.858 [2024-12-06 11:54:13.423942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.118 151.33 IOPS, 454.00 MiB/s [2024-12-06T11:54:13.876Z] 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.118 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.118 "name": "raid_bdev1", 00:11:16.118 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 
00:11:16.118 "strip_size_kb": 0, 00:11:16.118 "state": "online", 00:11:16.118 "raid_level": "raid1", 00:11:16.118 "superblock": true, 00:11:16.118 "num_base_bdevs": 2, 00:11:16.118 "num_base_bdevs_discovered": 2, 00:11:16.118 "num_base_bdevs_operational": 2, 00:11:16.118 "process": { 00:11:16.118 "type": "rebuild", 00:11:16.118 "target": "spare", 00:11:16.118 "progress": { 00:11:16.118 "blocks": 12288, 00:11:16.118 "percent": 19 00:11:16.118 } 00:11:16.118 }, 00:11:16.118 "base_bdevs_list": [ 00:11:16.118 { 00:11:16.118 "name": "spare", 00:11:16.118 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:16.118 "is_configured": true, 00:11:16.118 "data_offset": 2048, 00:11:16.118 "data_size": 63488 00:11:16.118 }, 00:11:16.118 { 00:11:16.118 "name": "BaseBdev2", 00:11:16.118 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:16.119 "is_configured": true, 00:11:16.119 "data_offset": 2048, 00:11:16.119 "data_size": 63488 00:11:16.119 } 00:11:16.119 ] 00:11:16.119 }' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:16.119 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 
00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=327 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.119 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.380 "name": "raid_bdev1", 00:11:16.380 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:16.380 "strip_size_kb": 0, 00:11:16.380 "state": "online", 00:11:16.380 "raid_level": "raid1", 00:11:16.380 "superblock": true, 00:11:16.380 "num_base_bdevs": 2, 00:11:16.380 "num_base_bdevs_discovered": 2, 00:11:16.380 "num_base_bdevs_operational": 2, 00:11:16.380 "process": { 00:11:16.380 "type": "rebuild", 00:11:16.380 "target": 
"spare", 00:11:16.380 "progress": { 00:11:16.380 "blocks": 16384, 00:11:16.380 "percent": 25 00:11:16.380 } 00:11:16.380 }, 00:11:16.380 "base_bdevs_list": [ 00:11:16.380 { 00:11:16.380 "name": "spare", 00:11:16.380 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:16.380 "is_configured": true, 00:11:16.380 "data_offset": 2048, 00:11:16.380 "data_size": 63488 00:11:16.380 }, 00:11:16.380 { 00:11:16.380 "name": "BaseBdev2", 00:11:16.380 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:16.380 "is_configured": true, 00:11:16.380 "data_offset": 2048, 00:11:16.380 "data_size": 63488 00:11:16.380 } 00:11:16.380 ] 00:11:16.380 }' 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.380 11:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.380 [2024-12-06 11:54:14.107981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:16.380 [2024-12-06 11:54:14.108416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:16.640 [2024-12-06 11:54:14.327038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:16.900 133.00 IOPS, 399.00 MiB/s [2024-12-06T11:54:14.658Z] [2024-12-06 11:54:14.536246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:17.160 [2024-12-06 11:54:14.737141] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:17.160 [2024-12-06 11:54:14.737301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.421 11:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.421 "name": "raid_bdev1", 00:11:17.421 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:17.421 "strip_size_kb": 0, 00:11:17.421 "state": "online", 00:11:17.421 "raid_level": "raid1", 00:11:17.421 "superblock": true, 00:11:17.421 "num_base_bdevs": 2, 00:11:17.421 "num_base_bdevs_discovered": 2, 00:11:17.421 "num_base_bdevs_operational": 2, 00:11:17.421 "process": { 00:11:17.421 "type": 
"rebuild", 00:11:17.421 "target": "spare", 00:11:17.421 "progress": { 00:11:17.421 "blocks": 30720, 00:11:17.421 "percent": 48 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 "base_bdevs_list": [ 00:11:17.421 { 00:11:17.421 "name": "spare", 00:11:17.421 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "name": "BaseBdev2", 00:11:17.421 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.421 } 00:11:17.421 ] 00:11:17.421 }' 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.421 11:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:17.989 111.00 IOPS, 333.00 MiB/s [2024-12-06T11:54:15.747Z] [2024-12-06 11:54:15.711852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:18.248 [2024-12-06 11:54:15.923904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.507 
11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.507 "name": "raid_bdev1", 00:11:18.507 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:18.507 "strip_size_kb": 0, 00:11:18.507 "state": "online", 00:11:18.507 "raid_level": "raid1", 00:11:18.507 "superblock": true, 00:11:18.507 "num_base_bdevs": 2, 00:11:18.507 "num_base_bdevs_discovered": 2, 00:11:18.507 "num_base_bdevs_operational": 2, 00:11:18.507 "process": { 00:11:18.507 "type": "rebuild", 00:11:18.507 "target": "spare", 00:11:18.507 "progress": { 00:11:18.507 "blocks": 47104, 00:11:18.507 "percent": 74 00:11:18.507 } 00:11:18.507 }, 00:11:18.507 "base_bdevs_list": [ 00:11:18.507 { 00:11:18.507 "name": "spare", 00:11:18.507 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:18.507 "is_configured": true, 00:11:18.507 "data_offset": 2048, 00:11:18.507 "data_size": 63488 00:11:18.507 }, 00:11:18.507 { 00:11:18.507 "name": "BaseBdev2", 00:11:18.507 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:18.507 "is_configured": true, 00:11:18.507 "data_offset": 2048, 00:11:18.507 
"data_size": 63488 00:11:18.507 } 00:11:18.507 ] 00:11:18.507 }' 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.507 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.766 11:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.025 99.83 IOPS, 299.50 MiB/s [2024-12-06T11:54:16.783Z] [2024-12-06 11:54:16.574891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:19.285 [2024-12-06 11:54:17.006150] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:19.545 [2024-12-06 11:54:17.105941] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:19.545 [2024-12-06 11:54:17.107288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.545 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.805 "name": "raid_bdev1", 00:11:19.805 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:19.805 "strip_size_kb": 0, 00:11:19.805 "state": "online", 00:11:19.805 "raid_level": "raid1", 00:11:19.805 "superblock": true, 00:11:19.805 "num_base_bdevs": 2, 00:11:19.805 "num_base_bdevs_discovered": 2, 00:11:19.805 "num_base_bdevs_operational": 2, 00:11:19.805 "base_bdevs_list": [ 00:11:19.805 { 00:11:19.805 "name": "spare", 00:11:19.805 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:19.805 "is_configured": true, 00:11:19.805 "data_offset": 2048, 00:11:19.805 "data_size": 63488 00:11:19.805 }, 00:11:19.805 { 00:11:19.805 "name": "BaseBdev2", 00:11:19.805 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:19.805 "is_configured": true, 00:11:19.805 "data_offset": 2048, 00:11:19.805 "data_size": 63488 00:11:19.805 } 00:11:19.805 ] 00:11:19.805 }' 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 
-- # break 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.805 90.86 IOPS, 272.57 MiB/s [2024-12-06T11:54:17.563Z] 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.805 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.805 "name": "raid_bdev1", 00:11:19.805 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:19.805 "strip_size_kb": 0, 00:11:19.805 "state": "online", 00:11:19.805 "raid_level": "raid1", 00:11:19.805 "superblock": true, 00:11:19.805 "num_base_bdevs": 2, 00:11:19.805 "num_base_bdevs_discovered": 2, 00:11:19.805 "num_base_bdevs_operational": 2, 00:11:19.806 "base_bdevs_list": [ 00:11:19.806 { 00:11:19.806 "name": "spare", 00:11:19.806 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:19.806 "is_configured": true, 00:11:19.806 "data_offset": 2048, 00:11:19.806 "data_size": 63488 00:11:19.806 }, 00:11:19.806 { 00:11:19.806 "name": "BaseBdev2", 00:11:19.806 "uuid": 
"6256e441-b056-583c-9f03-f5a56519c904", 00:11:19.806 "is_configured": true, 00:11:19.806 "data_offset": 2048, 00:11:19.806 "data_size": 63488 00:11:19.806 } 00:11:19.806 ] 00:11:19.806 }' 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.806 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.066 "name": "raid_bdev1", 00:11:20.066 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:20.066 "strip_size_kb": 0, 00:11:20.066 "state": "online", 00:11:20.066 "raid_level": "raid1", 00:11:20.066 "superblock": true, 00:11:20.066 "num_base_bdevs": 2, 00:11:20.066 "num_base_bdevs_discovered": 2, 00:11:20.066 "num_base_bdevs_operational": 2, 00:11:20.066 "base_bdevs_list": [ 00:11:20.066 { 00:11:20.066 "name": "spare", 00:11:20.066 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:20.066 "is_configured": true, 00:11:20.066 "data_offset": 2048, 00:11:20.066 "data_size": 63488 00:11:20.066 }, 00:11:20.066 { 00:11:20.066 "name": "BaseBdev2", 00:11:20.066 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:20.066 "is_configured": true, 00:11:20.066 "data_offset": 2048, 00:11:20.066 "data_size": 63488 00:11:20.066 } 00:11:20.066 ] 00:11:20.066 }' 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.066 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.326 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.326 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.326 11:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.326 [2024-12-06 11:54:17.982004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.326 [2024-12-06 
11:54:17.982101] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.326 00:11:20.326 Latency(us) 00:11:20.326 [2024-12-06T11:54:18.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.326 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:20.326 raid_bdev1 : 7.64 85.72 257.17 0.00 0.00 15637.02 279.03 112641.79 00:11:20.326 [2024-12-06T11:54:18.084Z] =================================================================================================================== 00:11:20.326 [2024-12-06T11:54:18.084Z] Total : 85.72 257.17 0.00 0.00 15637.02 279.03 112641.79 00:11:20.326 [2024-12-06 11:54:18.080866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.326 [2024-12-06 11:54:18.080939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.326 [2024-12-06 11:54:18.081059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.326 [2024-12-06 11:54:18.081110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:20.326 { 00:11:20.326 "results": [ 00:11:20.326 { 00:11:20.326 "job": "raid_bdev1", 00:11:20.326 "core_mask": "0x1", 00:11:20.327 "workload": "randrw", 00:11:20.327 "percentage": 50, 00:11:20.327 "status": "finished", 00:11:20.327 "queue_depth": 2, 00:11:20.327 "io_size": 3145728, 00:11:20.327 "runtime": 7.640752, 00:11:20.327 "iops": 85.72454648442981, 00:11:20.327 "mibps": 257.1736394532894, 00:11:20.327 "io_failed": 0, 00:11:20.327 "io_timeout": 0, 00:11:20.327 "avg_latency_us": 15637.01625254175, 00:11:20.327 "min_latency_us": 279.0288209606987, 00:11:20.327 "max_latency_us": 112641.78864628822 00:11:20.327 } 00:11:20.327 ], 00:11:20.327 "core_count": 1 00:11:20.327 } 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.587 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:20.587 /dev/nbd0 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.847 1+0 records in 00:11:20.847 1+0 records out 00:11:20.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363212 s, 11.3 MB/s 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:20.847 /dev/nbd1 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.847 1+0 records in 00:11:20.847 1+0 records out 00:11:20.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473695 s, 8.6 MB/s 00:11:20.847 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.108 11:54:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:21.108 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.369 11:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.369 
11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.369 [2024-12-06 11:54:19.084999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:21.369 [2024-12-06 11:54:19.085098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.369 [2024-12-06 11:54:19.085149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:21.369 [2024-12-06 11:54:19.085182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.369 [2024-12-06 11:54:19.087313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.369 [2024-12-06 11:54:19.087397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:21.369 [2024-12-06 11:54:19.087503] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:21.369 [2024-12-06 11:54:19.087581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:21.369 [2024-12-06 11:54:19.087721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.369 spare 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.369 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.631 [2024-12-06 11:54:19.187645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001580 00:11:21.631 [2024-12-06 11:54:19.187709] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.631 [2024-12-06 11:54:19.188005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:11:21.631 [2024-12-06 11:54:19.188152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:21.631 [2024-12-06 11:54:19.188192] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:21.631 [2024-12-06 11:54:19.188353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.631 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.632 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.632 "name": "raid_bdev1", 00:11:21.632 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:21.632 "strip_size_kb": 0, 00:11:21.632 "state": "online", 00:11:21.632 "raid_level": "raid1", 00:11:21.632 "superblock": true, 00:11:21.632 "num_base_bdevs": 2, 00:11:21.632 "num_base_bdevs_discovered": 2, 00:11:21.632 "num_base_bdevs_operational": 2, 00:11:21.632 "base_bdevs_list": [ 00:11:21.632 { 00:11:21.632 "name": "spare", 00:11:21.632 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:21.632 "is_configured": true, 00:11:21.632 "data_offset": 2048, 00:11:21.632 "data_size": 63488 00:11:21.632 }, 00:11:21.632 { 00:11:21.632 "name": "BaseBdev2", 00:11:21.632 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:21.632 "is_configured": true, 00:11:21.632 "data_offset": 2048, 00:11:21.632 "data_size": 63488 00:11:21.632 } 00:11:21.632 ] 00:11:21.632 }' 00:11:21.632 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.632 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.892 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.152 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.152 "name": "raid_bdev1", 00:11:22.152 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:22.152 "strip_size_kb": 0, 00:11:22.152 "state": "online", 00:11:22.152 "raid_level": "raid1", 00:11:22.152 "superblock": true, 00:11:22.152 "num_base_bdevs": 2, 00:11:22.152 "num_base_bdevs_discovered": 2, 00:11:22.152 "num_base_bdevs_operational": 2, 00:11:22.152 "base_bdevs_list": [ 00:11:22.153 { 00:11:22.153 "name": "spare", 00:11:22.153 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:22.153 "is_configured": true, 00:11:22.153 "data_offset": 2048, 00:11:22.153 "data_size": 63488 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev2", 00:11:22.153 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:22.153 "is_configured": true, 00:11:22.153 "data_offset": 2048, 00:11:22.153 "data_size": 63488 00:11:22.153 } 00:11:22.153 ] 00:11:22.153 }' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 [2024-12-06 11:54:19.795923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.153 "name": "raid_bdev1", 00:11:22.153 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:22.153 "strip_size_kb": 0, 00:11:22.153 "state": "online", 00:11:22.153 "raid_level": "raid1", 00:11:22.153 "superblock": true, 00:11:22.153 "num_base_bdevs": 2, 00:11:22.153 "num_base_bdevs_discovered": 1, 00:11:22.153 "num_base_bdevs_operational": 1, 00:11:22.153 "base_bdevs_list": [ 00:11:22.153 { 00:11:22.153 "name": null, 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 63488 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev2", 00:11:22.153 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:22.153 
"is_configured": true, 00:11:22.153 "data_offset": 2048, 00:11:22.153 "data_size": 63488 00:11:22.153 } 00:11:22.153 ] 00:11:22.153 }' 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.153 11:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 11:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:22.721 11:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.721 11:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.722 [2024-12-06 11:54:20.227349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.722 [2024-12-06 11:54:20.227558] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:22.722 [2024-12-06 11:54:20.227592] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:22.722 [2024-12-06 11:54:20.227645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.722 [2024-12-06 11:54:20.231998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:11:22.722 11:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.722 11:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:22.722 [2024-12-06 11:54:20.233885] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.658 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.658 "name": "raid_bdev1", 00:11:23.658 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:23.658 "strip_size_kb": 0, 00:11:23.658 "state": "online", 
00:11:23.658 "raid_level": "raid1", 00:11:23.658 "superblock": true, 00:11:23.658 "num_base_bdevs": 2, 00:11:23.658 "num_base_bdevs_discovered": 2, 00:11:23.658 "num_base_bdevs_operational": 2, 00:11:23.658 "process": { 00:11:23.658 "type": "rebuild", 00:11:23.658 "target": "spare", 00:11:23.658 "progress": { 00:11:23.658 "blocks": 20480, 00:11:23.658 "percent": 32 00:11:23.658 } 00:11:23.658 }, 00:11:23.658 "base_bdevs_list": [ 00:11:23.658 { 00:11:23.658 "name": "spare", 00:11:23.658 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:23.658 "is_configured": true, 00:11:23.658 "data_offset": 2048, 00:11:23.658 "data_size": 63488 00:11:23.658 }, 00:11:23.658 { 00:11:23.658 "name": "BaseBdev2", 00:11:23.658 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:23.658 "is_configured": true, 00:11:23.658 "data_offset": 2048, 00:11:23.658 "data_size": 63488 00:11:23.658 } 00:11:23.658 ] 00:11:23.658 }' 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.659 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.659 [2024-12-06 11:54:21.391376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.917 [2024-12-06 11:54:21.437691] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:23.917 [2024-12-06 
11:54:21.437742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.917 [2024-12-06 11:54:21.437761] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.917 [2024-12-06 11:54:21.437768] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:23.917 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.917 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.918 "name": "raid_bdev1", 00:11:23.918 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:23.918 "strip_size_kb": 0, 00:11:23.918 "state": "online", 00:11:23.918 "raid_level": "raid1", 00:11:23.918 "superblock": true, 00:11:23.918 "num_base_bdevs": 2, 00:11:23.918 "num_base_bdevs_discovered": 1, 00:11:23.918 "num_base_bdevs_operational": 1, 00:11:23.918 "base_bdevs_list": [ 00:11:23.918 { 00:11:23.918 "name": null, 00:11:23.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.918 "is_configured": false, 00:11:23.918 "data_offset": 0, 00:11:23.918 "data_size": 63488 00:11:23.918 }, 00:11:23.918 { 00:11:23.918 "name": "BaseBdev2", 00:11:23.918 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:23.918 "is_configured": true, 00:11:23.918 "data_offset": 2048, 00:11:23.918 "data_size": 63488 00:11:23.918 } 00:11:23.918 ] 00:11:23.918 }' 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.918 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.177 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:24.177 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.177 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.177 [2024-12-06 11:54:21.805347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:24.177 [2024-12-06 11:54:21.805462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.177 [2024-12-06 11:54:21.805508] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.177 [2024-12-06 11:54:21.805535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.177 [2024-12-06 11:54:21.805951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.177 [2024-12-06 11:54:21.806004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:24.177 [2024-12-06 11:54:21.806111] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:24.177 [2024-12-06 11:54:21.806148] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:24.177 [2024-12-06 11:54:21.806186] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:24.177 [2024-12-06 11:54:21.806305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.177 [2024-12-06 11:54:21.810147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:24.177 spare 00:11:24.177 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.177 11:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:24.177 [2024-12-06 11:54:21.811993] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.113 "name": "raid_bdev1", 00:11:25.113 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:25.113 "strip_size_kb": 0, 00:11:25.113 "state": "online", 00:11:25.113 "raid_level": "raid1", 00:11:25.113 "superblock": true, 00:11:25.113 "num_base_bdevs": 2, 00:11:25.113 "num_base_bdevs_discovered": 2, 00:11:25.113 "num_base_bdevs_operational": 2, 00:11:25.113 "process": { 00:11:25.113 "type": "rebuild", 00:11:25.113 "target": "spare", 00:11:25.113 "progress": { 00:11:25.113 "blocks": 20480, 00:11:25.113 "percent": 32 00:11:25.113 } 00:11:25.113 }, 00:11:25.113 "base_bdevs_list": [ 00:11:25.113 { 00:11:25.113 "name": "spare", 00:11:25.113 "uuid": "9e80f802-a9d5-599e-9092-bd464b291ba1", 00:11:25.113 "is_configured": true, 00:11:25.113 "data_offset": 2048, 00:11:25.113 "data_size": 63488 00:11:25.113 }, 00:11:25.113 { 00:11:25.113 "name": "BaseBdev2", 00:11:25.113 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:25.113 "is_configured": true, 00:11:25.113 "data_offset": 2048, 00:11:25.113 "data_size": 63488 00:11:25.113 } 00:11:25.113 ] 00:11:25.113 }' 00:11:25.113 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.402 11:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.402 [2024-12-06 11:54:22.976450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.402 [2024-12-06 11:54:23.015861] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:25.402 [2024-12-06 11:54:23.015922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.402 [2024-12-06 11:54:23.015937] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.402 [2024-12-06 11:54:23.015946] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.402 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.402 "name": "raid_bdev1", 00:11:25.402 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:25.402 "strip_size_kb": 0, 00:11:25.402 "state": "online", 00:11:25.402 "raid_level": "raid1", 00:11:25.402 "superblock": true, 00:11:25.402 "num_base_bdevs": 2, 00:11:25.402 "num_base_bdevs_discovered": 1, 00:11:25.402 "num_base_bdevs_operational": 1, 00:11:25.402 "base_bdevs_list": [ 00:11:25.402 { 00:11:25.403 "name": null, 00:11:25.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.403 "is_configured": false, 00:11:25.403 "data_offset": 0, 00:11:25.403 "data_size": 63488 00:11:25.403 }, 00:11:25.403 { 00:11:25.403 "name": "BaseBdev2", 00:11:25.403 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:25.403 "is_configured": true, 00:11:25.403 "data_offset": 2048, 00:11:25.403 "data_size": 63488 00:11:25.403 } 00:11:25.403 ] 00:11:25.403 }' 
00:11:25.403 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.403 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.673 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.931 "name": "raid_bdev1", 00:11:25.931 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:25.931 "strip_size_kb": 0, 00:11:25.931 "state": "online", 00:11:25.931 "raid_level": "raid1", 00:11:25.931 "superblock": true, 00:11:25.931 "num_base_bdevs": 2, 00:11:25.931 "num_base_bdevs_discovered": 1, 00:11:25.931 "num_base_bdevs_operational": 1, 00:11:25.931 "base_bdevs_list": [ 00:11:25.931 { 00:11:25.931 "name": null, 00:11:25.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.931 "is_configured": false, 00:11:25.931 "data_offset": 0, 
00:11:25.931 "data_size": 63488 00:11:25.931 }, 00:11:25.931 { 00:11:25.931 "name": "BaseBdev2", 00:11:25.931 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:25.931 "is_configured": true, 00:11:25.931 "data_offset": 2048, 00:11:25.931 "data_size": 63488 00:11:25.931 } 00:11:25.931 ] 00:11:25.931 }' 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.931 [2024-12-06 11:54:23.563412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:25.931 [2024-12-06 11:54:23.563487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.931 [2024-12-06 11:54:23.563507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:25.931 [2024-12-06 11:54:23.563517] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.931 [2024-12-06 11:54:23.563897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.931 [2024-12-06 11:54:23.563915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.931 [2024-12-06 11:54:23.563982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:25.931 [2024-12-06 11:54:23.563997] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:25.931 [2024-12-06 11:54:23.564007] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:25.931 [2024-12-06 11:54:23.564017] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:25.931 BaseBdev1 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.931 11:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.873 "name": "raid_bdev1", 00:11:26.873 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:26.873 "strip_size_kb": 0, 00:11:26.873 "state": "online", 00:11:26.873 "raid_level": "raid1", 00:11:26.873 "superblock": true, 00:11:26.873 "num_base_bdevs": 2, 00:11:26.873 "num_base_bdevs_discovered": 1, 00:11:26.873 "num_base_bdevs_operational": 1, 00:11:26.873 "base_bdevs_list": [ 00:11:26.873 { 00:11:26.873 "name": null, 00:11:26.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.873 "is_configured": false, 00:11:26.873 "data_offset": 0, 00:11:26.873 "data_size": 63488 00:11:26.873 }, 00:11:26.873 { 00:11:26.873 "name": "BaseBdev2", 00:11:26.873 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:26.873 "is_configured": true, 00:11:26.873 "data_offset": 2048, 00:11:26.873 "data_size": 63488 00:11:26.873 } 00:11:26.873 ] 00:11:26.873 }' 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.873 11:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.442 "name": "raid_bdev1", 00:11:27.442 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:27.442 "strip_size_kb": 0, 00:11:27.442 "state": "online", 00:11:27.442 "raid_level": "raid1", 00:11:27.442 "superblock": true, 00:11:27.442 "num_base_bdevs": 2, 00:11:27.442 "num_base_bdevs_discovered": 1, 00:11:27.442 "num_base_bdevs_operational": 1, 00:11:27.442 "base_bdevs_list": [ 00:11:27.442 { 00:11:27.442 "name": null, 00:11:27.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.442 "is_configured": false, 00:11:27.442 "data_offset": 0, 00:11:27.442 "data_size": 63488 00:11:27.442 }, 00:11:27.442 { 00:11:27.442 "name": "BaseBdev2", 00:11:27.442 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:27.442 "is_configured": true, 
00:11:27.442 "data_offset": 2048, 00:11:27.442 "data_size": 63488 00:11:27.442 } 00:11:27.442 ] 00:11:27.442 }' 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.442 [2024-12-06 11:54:25.144871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.442 [2024-12-06 11:54:25.145019] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:27.442 [2024-12-06 11:54:25.145031] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:27.442 request: 00:11:27.442 { 00:11:27.442 "base_bdev": "BaseBdev1", 00:11:27.442 "raid_bdev": "raid_bdev1", 00:11:27.442 "method": "bdev_raid_add_base_bdev", 00:11:27.442 "req_id": 1 00:11:27.442 } 00:11:27.442 Got JSON-RPC error response 00:11:27.442 response: 00:11:27.442 { 00:11:27.442 "code": -22, 00:11:27.442 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:27.442 } 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:27.442 11:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.819 "name": "raid_bdev1", 00:11:28.819 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:28.819 "strip_size_kb": 0, 00:11:28.819 "state": "online", 00:11:28.819 "raid_level": "raid1", 00:11:28.819 "superblock": true, 00:11:28.819 "num_base_bdevs": 2, 00:11:28.819 "num_base_bdevs_discovered": 1, 00:11:28.819 "num_base_bdevs_operational": 1, 00:11:28.819 "base_bdevs_list": [ 00:11:28.819 { 00:11:28.819 "name": null, 00:11:28.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.819 "is_configured": false, 00:11:28.819 "data_offset": 0, 00:11:28.819 "data_size": 63488 00:11:28.819 }, 00:11:28.819 { 00:11:28.819 "name": "BaseBdev2", 00:11:28.819 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:28.819 "is_configured": true, 00:11:28.819 "data_offset": 2048, 00:11:28.819 "data_size": 63488 00:11:28.819 } 00:11:28.819 ] 00:11:28.819 }' 
00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.819 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.077 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.077 "name": "raid_bdev1", 00:11:29.077 "uuid": "2c19ebd8-7292-4379-bdc7-313821f4b31c", 00:11:29.077 "strip_size_kb": 0, 00:11:29.077 "state": "online", 00:11:29.077 "raid_level": "raid1", 00:11:29.077 "superblock": true, 00:11:29.077 "num_base_bdevs": 2, 00:11:29.077 "num_base_bdevs_discovered": 1, 00:11:29.077 "num_base_bdevs_operational": 1, 00:11:29.077 "base_bdevs_list": [ 00:11:29.077 { 00:11:29.077 "name": null, 00:11:29.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.077 "is_configured": false, 00:11:29.077 "data_offset": 0, 
00:11:29.077 "data_size": 63488 00:11:29.077 }, 00:11:29.077 { 00:11:29.077 "name": "BaseBdev2", 00:11:29.077 "uuid": "6256e441-b056-583c-9f03-f5a56519c904", 00:11:29.077 "is_configured": true, 00:11:29.077 "data_offset": 2048, 00:11:29.077 "data_size": 63488 00:11:29.077 } 00:11:29.077 ] 00:11:29.077 }' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87133 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87133 ']' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87133 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87133 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.078 killing process with pid 87133 00:11:29.078 Received shutdown signal, test time was about 16.361867 seconds 00:11:29.078 00:11:29.078 Latency(us) 00:11:29.078 [2024-12-06T11:54:26.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.078 [2024-12-06T11:54:26.836Z] 
=================================================================================================================== 00:11:29.078 [2024-12-06T11:54:26.836Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87133' 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87133 00:11:29.078 [2024-12-06 11:54:26.782099] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.078 [2024-12-06 11:54:26.782244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.078 11:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87133 00:11:29.078 [2024-12-06 11:54:26.782305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.078 [2024-12-06 11:54:26.782317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:29.078 [2024-12-06 11:54:26.808291] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.336 11:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:29.336 00:11:29.336 real 0m18.194s 00:11:29.336 user 0m24.025s 00:11:29.336 sys 0m1.996s 00:11:29.336 ************************************ 00:11:29.336 END TEST raid_rebuild_test_sb_io 00:11:29.336 ************************************ 00:11:29.336 11:54:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.336 11:54:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.595 11:54:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:29.595 11:54:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:29.595 11:54:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 
7 -le 1 ']' 00:11:29.595 11:54:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.595 11:54:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.595 ************************************ 00:11:29.595 START TEST raid_rebuild_test 00:11:29.595 ************************************ 00:11:29.595 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:29.595 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:29.595 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:29.595 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:29.595 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87799 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87799 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87799 ']' 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.596 11:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 [2024-12-06 11:54:27.200963] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:29.596 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:29.596 Zero copy mechanism will not be used. 00:11:29.596 [2024-12-06 11:54:27.201164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87799 ] 00:11:29.596 [2024-12-06 11:54:27.346669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.855 [2024-12-06 11:54:27.392204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.855 [2024-12-06 11:54:27.434277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.855 [2024-12-06 11:54:27.434311] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.424 BaseBdev1_malloc 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.424 [2024-12-06 11:54:28.027954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.424 [2024-12-06 11:54:28.028025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.424 [2024-12-06 11:54:28.028061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:30.424 [2024-12-06 11:54:28.028074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.424 [2024-12-06 11:54:28.030126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.424 [2024-12-06 11:54:28.030164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.424 BaseBdev1 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.424 11:54:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.424 BaseBdev2_malloc 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.424 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.424 [2024-12-06 11:54:28.066987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:30.424 [2024-12-06 11:54:28.067112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.424 [2024-12-06 11:54:28.067165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:30.424 [2024-12-06 11:54:28.067194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.424 [2024-12-06 11:54:28.071617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.425 [2024-12-06 11:54:28.071680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.425 BaseBdev2 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 BaseBdev3_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 [2024-12-06 11:54:28.097334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:30.425 [2024-12-06 11:54:28.097386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.425 [2024-12-06 11:54:28.097411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:30.425 [2024-12-06 11:54:28.097420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.425 [2024-12-06 11:54:28.099438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.425 [2024-12-06 11:54:28.099473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.425 BaseBdev3 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 BaseBdev4_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 [2024-12-06 11:54:28.126403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:30.425 [2024-12-06 11:54:28.126504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.425 [2024-12-06 11:54:28.126531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:30.425 [2024-12-06 11:54:28.126540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.425 [2024-12-06 11:54:28.128596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.425 [2024-12-06 11:54:28.128630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.425 BaseBdev4 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 spare_malloc 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 spare_delay 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 [2024-12-06 11:54:28.166727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.425 [2024-12-06 11:54:28.166772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.425 [2024-12-06 11:54:28.166791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:30.425 [2024-12-06 11:54:28.166799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.425 [2024-12-06 11:54:28.168824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.425 [2024-12-06 11:54:28.168857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.425 spare 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.425 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.425 [2024-12-06 11:54:28.178774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.685 [2024-12-06 11:54:28.180533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.685 [2024-12-06 11:54:28.180594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.685 [2024-12-06 11:54:28.180640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.685 [2024-12-06 
11:54:28.180712] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:30.685 [2024-12-06 11:54:28.180721] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.685 [2024-12-06 11:54:28.180978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:30.685 [2024-12-06 11:54:28.181101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:30.685 [2024-12-06 11:54:28.181113] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:30.685 [2024-12-06 11:54:28.181218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.685 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.685 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:30.685 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.686 "name": "raid_bdev1", 00:11:30.686 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:30.686 "strip_size_kb": 0, 00:11:30.686 "state": "online", 00:11:30.686 "raid_level": "raid1", 00:11:30.686 "superblock": false, 00:11:30.686 "num_base_bdevs": 4, 00:11:30.686 "num_base_bdevs_discovered": 4, 00:11:30.686 "num_base_bdevs_operational": 4, 00:11:30.686 "base_bdevs_list": [ 00:11:30.686 { 00:11:30.686 "name": "BaseBdev1", 00:11:30.686 "uuid": "62cde2a0-85ee-5efa-8579-5df89863ecba", 00:11:30.686 "is_configured": true, 00:11:30.686 "data_offset": 0, 00:11:30.686 "data_size": 65536 00:11:30.686 }, 00:11:30.686 { 00:11:30.686 "name": "BaseBdev2", 00:11:30.686 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:30.686 "is_configured": true, 00:11:30.686 "data_offset": 0, 00:11:30.686 "data_size": 65536 00:11:30.686 }, 00:11:30.686 { 00:11:30.686 "name": "BaseBdev3", 00:11:30.686 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:30.686 "is_configured": true, 00:11:30.686 "data_offset": 0, 00:11:30.686 "data_size": 65536 00:11:30.686 }, 00:11:30.686 { 00:11:30.686 "name": "BaseBdev4", 00:11:30.686 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:30.686 "is_configured": true, 00:11:30.686 "data_offset": 0, 00:11:30.686 "data_size": 65536 00:11:30.686 } 00:11:30.686 ] 00:11:30.686 }' 00:11:30.686 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.686 11:54:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.945 [2024-12-06 11:54:28.606336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:30.945 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:30.946 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:30.946 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:30.946 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:30.946 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:31.205 [2024-12-06 11:54:28.853639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:31.205 /dev/nbd0 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 
-- # (( i <= 20 )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.205 1+0 records in 00:11:31.205 1+0 records out 00:11:31.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489047 s, 8.4 MB/s 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:31.205 11:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:36.491 65536+0 records in 00:11:36.491 65536+0 records out 00:11:36.491 33554432 bytes (34 MB, 32 MiB) copied, 5.05072 s, 6.6 MB/s 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.491 11:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:36.491 [2024-12-06 11:54:34.156991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.491 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.492 [2024-12-06 11:54:34.193700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.492 11:54:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.492 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.751 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.751 "name": "raid_bdev1", 00:11:36.751 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:36.751 "strip_size_kb": 0, 00:11:36.751 "state": "online", 00:11:36.751 "raid_level": "raid1", 00:11:36.751 "superblock": false, 00:11:36.751 "num_base_bdevs": 4, 00:11:36.751 "num_base_bdevs_discovered": 3, 00:11:36.751 "num_base_bdevs_operational": 3, 00:11:36.751 "base_bdevs_list": [ 00:11:36.751 { 00:11:36.751 "name": null, 00:11:36.751 
"uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.751 "is_configured": false, 00:11:36.751 "data_offset": 0, 00:11:36.751 "data_size": 65536 00:11:36.751 }, 00:11:36.751 { 00:11:36.751 "name": "BaseBdev2", 00:11:36.751 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:36.751 "is_configured": true, 00:11:36.751 "data_offset": 0, 00:11:36.751 "data_size": 65536 00:11:36.751 }, 00:11:36.751 { 00:11:36.751 "name": "BaseBdev3", 00:11:36.751 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:36.751 "is_configured": true, 00:11:36.751 "data_offset": 0, 00:11:36.751 "data_size": 65536 00:11:36.751 }, 00:11:36.751 { 00:11:36.751 "name": "BaseBdev4", 00:11:36.751 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:36.751 "is_configured": true, 00:11:36.751 "data_offset": 0, 00:11:36.751 "data_size": 65536 00:11:36.751 } 00:11:36.751 ] 00:11:36.751 }' 00:11:36.751 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.751 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.011 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.011 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.011 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.011 [2024-12-06 11:54:34.620982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:37.011 [2024-12-06 11:54:34.624393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:11:37.011 [2024-12-06 11:54:34.626312] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.011 11:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.011 11:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:37.953 11:54:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.953 "name": "raid_bdev1", 00:11:37.953 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:37.953 "strip_size_kb": 0, 00:11:37.953 "state": "online", 00:11:37.953 "raid_level": "raid1", 00:11:37.953 "superblock": false, 00:11:37.953 "num_base_bdevs": 4, 00:11:37.953 "num_base_bdevs_discovered": 4, 00:11:37.953 "num_base_bdevs_operational": 4, 00:11:37.953 "process": { 00:11:37.953 "type": "rebuild", 00:11:37.953 "target": "spare", 00:11:37.953 "progress": { 00:11:37.953 "blocks": 20480, 00:11:37.953 "percent": 31 00:11:37.953 } 00:11:37.953 }, 00:11:37.953 "base_bdevs_list": [ 00:11:37.953 { 00:11:37.953 "name": "spare", 00:11:37.953 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:37.953 "is_configured": true, 00:11:37.953 "data_offset": 0, 00:11:37.953 "data_size": 65536 00:11:37.953 }, 00:11:37.953 { 
00:11:37.953 "name": "BaseBdev2", 00:11:37.953 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:37.953 "is_configured": true, 00:11:37.953 "data_offset": 0, 00:11:37.953 "data_size": 65536 00:11:37.953 }, 00:11:37.953 { 00:11:37.953 "name": "BaseBdev3", 00:11:37.953 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:37.953 "is_configured": true, 00:11:37.953 "data_offset": 0, 00:11:37.953 "data_size": 65536 00:11:37.953 }, 00:11:37.953 { 00:11:37.953 "name": "BaseBdev4", 00:11:37.953 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:37.953 "is_configured": true, 00:11:37.953 "data_offset": 0, 00:11:37.953 "data_size": 65536 00:11:37.953 } 00:11:37.953 ] 00:11:37.953 }' 00:11:37.953 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.211 [2024-12-06 11:54:35.772868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.211 [2024-12-06 11:54:35.830700] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:38.211 [2024-12-06 11:54:35.830752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.211 [2024-12-06 11:54:35.830769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.211 [2024-12-06 11:54:35.830776] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.211 "name": "raid_bdev1", 00:11:38.211 "uuid": 
"4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:38.211 "strip_size_kb": 0, 00:11:38.211 "state": "online", 00:11:38.211 "raid_level": "raid1", 00:11:38.211 "superblock": false, 00:11:38.211 "num_base_bdevs": 4, 00:11:38.211 "num_base_bdevs_discovered": 3, 00:11:38.211 "num_base_bdevs_operational": 3, 00:11:38.211 "base_bdevs_list": [ 00:11:38.211 { 00:11:38.211 "name": null, 00:11:38.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.211 "is_configured": false, 00:11:38.211 "data_offset": 0, 00:11:38.211 "data_size": 65536 00:11:38.211 }, 00:11:38.211 { 00:11:38.211 "name": "BaseBdev2", 00:11:38.211 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:38.211 "is_configured": true, 00:11:38.211 "data_offset": 0, 00:11:38.211 "data_size": 65536 00:11:38.211 }, 00:11:38.211 { 00:11:38.211 "name": "BaseBdev3", 00:11:38.211 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:38.211 "is_configured": true, 00:11:38.211 "data_offset": 0, 00:11:38.211 "data_size": 65536 00:11:38.211 }, 00:11:38.211 { 00:11:38.211 "name": "BaseBdev4", 00:11:38.211 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:38.211 "is_configured": true, 00:11:38.211 "data_offset": 0, 00:11:38.211 "data_size": 65536 00:11:38.211 } 00:11:38.211 ] 00:11:38.211 }' 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.211 11:54:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.470 "name": "raid_bdev1", 00:11:38.470 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:38.470 "strip_size_kb": 0, 00:11:38.470 "state": "online", 00:11:38.470 "raid_level": "raid1", 00:11:38.470 "superblock": false, 00:11:38.470 "num_base_bdevs": 4, 00:11:38.470 "num_base_bdevs_discovered": 3, 00:11:38.470 "num_base_bdevs_operational": 3, 00:11:38.470 "base_bdevs_list": [ 00:11:38.470 { 00:11:38.470 "name": null, 00:11:38.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.470 "is_configured": false, 00:11:38.470 "data_offset": 0, 00:11:38.470 "data_size": 65536 00:11:38.470 }, 00:11:38.470 { 00:11:38.470 "name": "BaseBdev2", 00:11:38.470 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:38.470 "is_configured": true, 00:11:38.470 "data_offset": 0, 00:11:38.470 "data_size": 65536 00:11:38.470 }, 00:11:38.470 { 00:11:38.470 "name": "BaseBdev3", 00:11:38.470 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:38.470 "is_configured": true, 00:11:38.470 "data_offset": 0, 00:11:38.470 "data_size": 65536 00:11:38.470 }, 00:11:38.470 { 00:11:38.470 "name": "BaseBdev4", 00:11:38.470 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:38.470 "is_configured": true, 00:11:38.470 "data_offset": 0, 00:11:38.470 "data_size": 65536 00:11:38.470 } 00:11:38.470 ] 00:11:38.470 }' 00:11:38.470 11:54:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.729 [2024-12-06 11:54:36.301869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:38.729 [2024-12-06 11:54:36.305174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:11:38.729 [2024-12-06 11:54:36.306988] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.729 11:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.665 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.665 "name": "raid_bdev1", 00:11:39.665 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:39.665 "strip_size_kb": 0, 00:11:39.665 "state": "online", 00:11:39.665 "raid_level": "raid1", 00:11:39.665 "superblock": false, 00:11:39.665 "num_base_bdevs": 4, 00:11:39.665 "num_base_bdevs_discovered": 4, 00:11:39.665 "num_base_bdevs_operational": 4, 00:11:39.665 "process": { 00:11:39.666 "type": "rebuild", 00:11:39.666 "target": "spare", 00:11:39.666 "progress": { 00:11:39.666 "blocks": 20480, 00:11:39.666 "percent": 31 00:11:39.666 } 00:11:39.666 }, 00:11:39.666 "base_bdevs_list": [ 00:11:39.666 { 00:11:39.666 "name": "spare", 00:11:39.666 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:39.666 "is_configured": true, 00:11:39.666 "data_offset": 0, 00:11:39.666 "data_size": 65536 00:11:39.666 }, 00:11:39.666 { 00:11:39.666 "name": "BaseBdev2", 00:11:39.666 "uuid": "40380e7d-7006-5f4e-8359-2b4d3d8846bb", 00:11:39.666 "is_configured": true, 00:11:39.666 "data_offset": 0, 00:11:39.666 "data_size": 65536 00:11:39.666 }, 00:11:39.666 { 00:11:39.666 "name": "BaseBdev3", 00:11:39.666 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:39.666 "is_configured": true, 00:11:39.666 "data_offset": 0, 00:11:39.666 "data_size": 65536 00:11:39.666 }, 00:11:39.666 { 00:11:39.666 "name": "BaseBdev4", 00:11:39.666 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:39.666 "is_configured": true, 00:11:39.666 "data_offset": 0, 00:11:39.666 "data_size": 65536 00:11:39.666 } 00:11:39.666 ] 00:11:39.666 }' 
00:11:39.666 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.666 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.666 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.925 [2024-12-06 11:54:37.453569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.925 [2024-12-06 11:54:37.510775] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.925 11:54:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.925 "name": "raid_bdev1", 00:11:39.925 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:39.925 "strip_size_kb": 0, 00:11:39.925 "state": "online", 00:11:39.925 "raid_level": "raid1", 00:11:39.925 "superblock": false, 00:11:39.925 "num_base_bdevs": 4, 00:11:39.925 "num_base_bdevs_discovered": 3, 00:11:39.925 "num_base_bdevs_operational": 3, 00:11:39.925 "process": { 00:11:39.925 "type": "rebuild", 00:11:39.925 "target": "spare", 00:11:39.925 "progress": { 00:11:39.925 "blocks": 24576, 00:11:39.925 "percent": 37 00:11:39.925 } 00:11:39.925 }, 00:11:39.925 "base_bdevs_list": [ 00:11:39.925 { 00:11:39.925 "name": "spare", 00:11:39.925 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:39.925 "is_configured": true, 00:11:39.925 "data_offset": 0, 00:11:39.925 "data_size": 65536 00:11:39.925 }, 00:11:39.925 { 00:11:39.925 "name": null, 00:11:39.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.925 "is_configured": false, 00:11:39.925 "data_offset": 0, 00:11:39.925 "data_size": 65536 00:11:39.925 }, 00:11:39.925 { 00:11:39.925 "name": 
"BaseBdev3", 00:11:39.925 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:39.925 "is_configured": true, 00:11:39.925 "data_offset": 0, 00:11:39.925 "data_size": 65536 00:11:39.925 }, 00:11:39.925 { 00:11:39.925 "name": "BaseBdev4", 00:11:39.925 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:39.925 "is_configured": true, 00:11:39.925 "data_offset": 0, 00:11:39.925 "data_size": 65536 00:11:39.925 } 00:11:39.925 ] 00:11:39.925 }' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=351 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.925 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.926 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.926 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.926 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.926 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.926 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.926 11:54:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.185 "name": "raid_bdev1", 00:11:40.185 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:40.185 "strip_size_kb": 0, 00:11:40.185 "state": "online", 00:11:40.185 "raid_level": "raid1", 00:11:40.185 "superblock": false, 00:11:40.185 "num_base_bdevs": 4, 00:11:40.185 "num_base_bdevs_discovered": 3, 00:11:40.185 "num_base_bdevs_operational": 3, 00:11:40.185 "process": { 00:11:40.185 "type": "rebuild", 00:11:40.185 "target": "spare", 00:11:40.185 "progress": { 00:11:40.185 "blocks": 26624, 00:11:40.185 "percent": 40 00:11:40.185 } 00:11:40.185 }, 00:11:40.185 "base_bdevs_list": [ 00:11:40.185 { 00:11:40.185 "name": "spare", 00:11:40.185 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:40.185 "is_configured": true, 00:11:40.185 "data_offset": 0, 00:11:40.185 "data_size": 65536 00:11:40.185 }, 00:11:40.185 { 00:11:40.185 "name": null, 00:11:40.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.185 "is_configured": false, 00:11:40.185 "data_offset": 0, 00:11:40.185 "data_size": 65536 00:11:40.185 }, 00:11:40.185 { 00:11:40.185 "name": "BaseBdev3", 00:11:40.185 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:40.185 "is_configured": true, 00:11:40.185 "data_offset": 0, 00:11:40.185 "data_size": 65536 00:11:40.185 }, 00:11:40.185 { 00:11:40.185 "name": "BaseBdev4", 00:11:40.185 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:40.185 "is_configured": true, 00:11:40.185 "data_offset": 0, 00:11:40.185 "data_size": 65536 00:11:40.185 } 00:11:40.185 ] 00:11:40.185 }' 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.185 11:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.217 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.217 "name": "raid_bdev1", 00:11:41.217 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:41.217 "strip_size_kb": 0, 00:11:41.217 "state": "online", 00:11:41.217 "raid_level": "raid1", 00:11:41.217 "superblock": false, 00:11:41.217 "num_base_bdevs": 4, 00:11:41.217 "num_base_bdevs_discovered": 3, 00:11:41.217 "num_base_bdevs_operational": 3, 00:11:41.217 "process": { 
00:11:41.217 "type": "rebuild", 00:11:41.217 "target": "spare", 00:11:41.217 "progress": { 00:11:41.217 "blocks": 49152, 00:11:41.217 "percent": 75 00:11:41.217 } 00:11:41.217 }, 00:11:41.217 "base_bdevs_list": [ 00:11:41.217 { 00:11:41.217 "name": "spare", 00:11:41.217 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:41.217 "is_configured": true, 00:11:41.217 "data_offset": 0, 00:11:41.218 "data_size": 65536 00:11:41.218 }, 00:11:41.218 { 00:11:41.218 "name": null, 00:11:41.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.218 "is_configured": false, 00:11:41.218 "data_offset": 0, 00:11:41.218 "data_size": 65536 00:11:41.218 }, 00:11:41.218 { 00:11:41.218 "name": "BaseBdev3", 00:11:41.218 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:41.218 "is_configured": true, 00:11:41.218 "data_offset": 0, 00:11:41.218 "data_size": 65536 00:11:41.218 }, 00:11:41.218 { 00:11:41.218 "name": "BaseBdev4", 00:11:41.218 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:41.218 "is_configured": true, 00:11:41.218 "data_offset": 0, 00:11:41.218 "data_size": 65536 00:11:41.218 } 00:11:41.218 ] 00:11:41.218 }' 00:11:41.218 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.218 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.218 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.218 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.218 11:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.786 [2024-12-06 11:54:39.517308] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:41.786 [2024-12-06 11:54:39.517422] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:41.786 [2024-12-06 11:54:39.517466] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.356 "name": "raid_bdev1", 00:11:42.356 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:42.356 "strip_size_kb": 0, 00:11:42.356 "state": "online", 00:11:42.356 "raid_level": "raid1", 00:11:42.356 "superblock": false, 00:11:42.356 "num_base_bdevs": 4, 00:11:42.356 "num_base_bdevs_discovered": 3, 00:11:42.356 "num_base_bdevs_operational": 3, 00:11:42.356 "base_bdevs_list": [ 00:11:42.356 { 00:11:42.356 "name": "spare", 00:11:42.356 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": null, 
00:11:42.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.356 "is_configured": false, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": "BaseBdev3", 00:11:42.356 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": "BaseBdev4", 00:11:42.356 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 } 00:11:42.356 ] 00:11:42.356 }' 00:11:42.356 11:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.356 11:54:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.356 "name": "raid_bdev1", 00:11:42.356 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:42.356 "strip_size_kb": 0, 00:11:42.356 "state": "online", 00:11:42.356 "raid_level": "raid1", 00:11:42.356 "superblock": false, 00:11:42.356 "num_base_bdevs": 4, 00:11:42.356 "num_base_bdevs_discovered": 3, 00:11:42.356 "num_base_bdevs_operational": 3, 00:11:42.356 "base_bdevs_list": [ 00:11:42.356 { 00:11:42.356 "name": "spare", 00:11:42.356 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": null, 00:11:42.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.356 "is_configured": false, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": "BaseBdev3", 00:11:42.356 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 }, 00:11:42.356 { 00:11:42.356 "name": "BaseBdev4", 00:11:42.356 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:42.356 "is_configured": true, 00:11:42.356 "data_offset": 0, 00:11:42.356 "data_size": 65536 00:11:42.356 } 00:11:42.356 ] 00:11:42.356 }' 00:11:42.356 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.616 "name": "raid_bdev1", 00:11:42.616 "uuid": "4a10b76f-5c8d-4bfd-a367-6b82db6714f6", 00:11:42.616 "strip_size_kb": 0, 00:11:42.616 "state": "online", 
00:11:42.616 "raid_level": "raid1", 00:11:42.616 "superblock": false, 00:11:42.616 "num_base_bdevs": 4, 00:11:42.616 "num_base_bdevs_discovered": 3, 00:11:42.616 "num_base_bdevs_operational": 3, 00:11:42.616 "base_bdevs_list": [ 00:11:42.616 { 00:11:42.616 "name": "spare", 00:11:42.616 "uuid": "c6ddb0ec-025c-5d0a-a2f9-a8e2bc8005de", 00:11:42.616 "is_configured": true, 00:11:42.616 "data_offset": 0, 00:11:42.616 "data_size": 65536 00:11:42.616 }, 00:11:42.616 { 00:11:42.616 "name": null, 00:11:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.616 "is_configured": false, 00:11:42.616 "data_offset": 0, 00:11:42.616 "data_size": 65536 00:11:42.616 }, 00:11:42.616 { 00:11:42.616 "name": "BaseBdev3", 00:11:42.616 "uuid": "3b91d2b1-dd43-5306-a9a5-4aa0ce7f5d8a", 00:11:42.616 "is_configured": true, 00:11:42.616 "data_offset": 0, 00:11:42.616 "data_size": 65536 00:11:42.616 }, 00:11:42.616 { 00:11:42.616 "name": "BaseBdev4", 00:11:42.616 "uuid": "8e26d245-5764-51e3-a383-e6273687d39b", 00:11:42.616 "is_configured": true, 00:11:42.616 "data_offset": 0, 00:11:42.616 "data_size": 65536 00:11:42.616 } 00:11:42.616 ] 00:11:42.616 }' 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.616 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.876 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.876 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.876 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.876 [2024-12-06 11:54:40.631158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.876 [2024-12-06 11:54:40.631225] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.876 [2024-12-06 11:54:40.631376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:42.876 [2024-12-06 11:54:40.631478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.876 [2024-12-06 11:54:40.631527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.136 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:43.136 /dev/nbd0 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.395 1+0 records in 00:11:43.395 1+0 records out 00:11:43.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377977 s, 10.8 MB/s 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:43.395 11:54:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.395 11:54:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:43.395 /dev/nbd1 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.395 1+0 records in 00:11:43.395 1+0 records out 00:11:43.395 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000345234 s, 11.9 MB/s 00:11:43.395 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.654 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87799 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87799 ']' 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87799 00:11:43.913 
11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87799 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.913 killing process with pid 87799 00:11:43.913 Received shutdown signal, test time was about 60.000000 seconds 00:11:43.913 00:11:43.913 Latency(us) 00:11:43.913 [2024-12-06T11:54:41.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.913 [2024-12-06T11:54:41.671Z] =================================================================================================================== 00:11:43.913 [2024-12-06T11:54:41.671Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87799' 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87799 00:11:43.913 [2024-12-06 11:54:41.660886] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.913 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87799 00:11:44.171 [2024-12-06 11:54:41.711078] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.430 11:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:44.430 00:11:44.430 real 0m14.831s 00:11:44.430 user 0m16.634s 00:11:44.430 sys 0m2.670s 00:11:44.430 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.430 11:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.430 
************************************ 00:11:44.430 END TEST raid_rebuild_test 00:11:44.430 ************************************ 00:11:44.430 11:54:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:44.430 11:54:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:44.430 11:54:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.430 11:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.430 ************************************ 00:11:44.430 START TEST raid_rebuild_test_sb 00:11:44.430 ************************************ 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- 
# (( i++ )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:44.430 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88223 
00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88223 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88223 ']' 00:11:44.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.431 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:44.431 Zero copy mechanism will not be used. 00:11:44.431 [2024-12-06 11:54:42.101952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:44.431 [2024-12-06 11:54:42.102075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88223 ] 00:11:44.690 [2024-12-06 11:54:42.246588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.690 [2024-12-06 11:54:42.290268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.690 [2024-12-06 11:54:42.331682] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.690 [2024-12-06 11:54:42.331717] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 BaseBdev1_malloc 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 [2024-12-06 11:54:42.937275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:45.258 [2024-12-06 11:54:42.937381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.258 [2024-12-06 11:54:42.937410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:45.258 [2024-12-06 11:54:42.937423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.258 [2024-12-06 11:54:42.939512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.258 [2024-12-06 11:54:42.939546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.258 BaseBdev1 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 BaseBdev2_malloc 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 [2024-12-06 11:54:42.981915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:45.258 [2024-12-06 11:54:42.982011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.258 [2024-12-06 11:54:42.982056] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.258 [2024-12-06 11:54:42.982077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.258 [2024-12-06 11:54:42.986861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.258 [2024-12-06 11:54:42.986932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.258 BaseBdev2 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 BaseBdev3_malloc 00:11:45.258 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.258 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:45.258 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.258 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.258 [2024-12-06 11:54:43.012519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:45.258 [2024-12-06 11:54:43.012580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.258 [2024-12-06 11:54:43.012619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.258 [2024-12-06 11:54:43.012628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:45.518 [2024-12-06 11:54:43.014603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.518 [2024-12-06 11:54:43.014684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.518 BaseBdev3 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 BaseBdev4_malloc 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 [2024-12-06 11:54:43.040871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:45.518 [2024-12-06 11:54:43.040914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.518 [2024-12-06 11:54:43.040950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.518 [2024-12-06 11:54:43.040958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.518 [2024-12-06 11:54:43.042991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.518 [2024-12-06 11:54:43.043022] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.518 BaseBdev4 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 spare_malloc 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 spare_delay 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 [2024-12-06 11:54:43.081132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:45.518 [2024-12-06 11:54:43.081174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.518 [2024-12-06 11:54:43.081207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:45.518 [2024-12-06 11:54:43.081216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:45.518 [2024-12-06 11:54:43.083235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.518 [2024-12-06 11:54:43.083340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:45.518 spare 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 [2024-12-06 11:54:43.093179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.518 [2024-12-06 11:54:43.094985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.518 [2024-12-06 11:54:43.095057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.518 [2024-12-06 11:54:43.095111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.518 [2024-12-06 11:54:43.095283] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:45.518 [2024-12-06 11:54:43.095299] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.518 [2024-12-06 11:54:43.095538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:45.518 [2024-12-06 11:54:43.095659] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:45.518 [2024-12-06 11:54:43.095693] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:45.518 [2024-12-06 11:54:43.095810] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.518 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.518 "name": "raid_bdev1", 00:11:45.518 "uuid": 
"c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:45.518 "strip_size_kb": 0, 00:11:45.518 "state": "online", 00:11:45.518 "raid_level": "raid1", 00:11:45.518 "superblock": true, 00:11:45.518 "num_base_bdevs": 4, 00:11:45.518 "num_base_bdevs_discovered": 4, 00:11:45.518 "num_base_bdevs_operational": 4, 00:11:45.518 "base_bdevs_list": [ 00:11:45.518 { 00:11:45.518 "name": "BaseBdev1", 00:11:45.518 "uuid": "0f4b993d-2676-503d-9fb3-90bcbf126053", 00:11:45.518 "is_configured": true, 00:11:45.518 "data_offset": 2048, 00:11:45.518 "data_size": 63488 00:11:45.518 }, 00:11:45.518 { 00:11:45.518 "name": "BaseBdev2", 00:11:45.518 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:45.518 "is_configured": true, 00:11:45.518 "data_offset": 2048, 00:11:45.518 "data_size": 63488 00:11:45.518 }, 00:11:45.518 { 00:11:45.518 "name": "BaseBdev3", 00:11:45.518 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:45.518 "is_configured": true, 00:11:45.518 "data_offset": 2048, 00:11:45.518 "data_size": 63488 00:11:45.518 }, 00:11:45.518 { 00:11:45.518 "name": "BaseBdev4", 00:11:45.518 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:45.518 "is_configured": true, 00:11:45.518 "data_offset": 2048, 00:11:45.519 "data_size": 63488 00:11:45.519 } 00:11:45.519 ] 00:11:45.519 }' 00:11:45.519 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.519 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.778 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.778 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.778 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:45.778 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.778 [2024-12-06 11:54:43.524693] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:46.037 11:54:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.037 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:46.037 [2024-12-06 11:54:43.752104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:46.037 /dev/nbd0 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.296 1+0 records in 00:11:46.296 1+0 records out 00:11:46.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401454 s, 10.2 MB/s 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:46.296 11:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:51.573 63488+0 records in 00:11:51.573 63488+0 records out 00:11:51.573 32505856 bytes (33 MB, 31 MiB) copied, 5.16082 s, 6.3 MB/s 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.573 11:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:11:51.573 [2024-12-06 11:54:49.185484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.573 [2024-12-06 11:54:49.216150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.573 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.574 "name": "raid_bdev1", 00:11:51.574 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:51.574 "strip_size_kb": 0, 00:11:51.574 "state": "online", 00:11:51.574 "raid_level": "raid1", 00:11:51.574 "superblock": true, 00:11:51.574 "num_base_bdevs": 4, 00:11:51.574 "num_base_bdevs_discovered": 3, 00:11:51.574 "num_base_bdevs_operational": 3, 00:11:51.574 "base_bdevs_list": [ 00:11:51.574 { 00:11:51.574 "name": null, 00:11:51.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.574 "is_configured": false, 00:11:51.574 "data_offset": 0, 00:11:51.574 "data_size": 63488 00:11:51.574 }, 00:11:51.574 { 00:11:51.574 "name": "BaseBdev2", 00:11:51.574 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:51.574 "is_configured": true, 00:11:51.574 
"data_offset": 2048, 00:11:51.574 "data_size": 63488 00:11:51.574 }, 00:11:51.574 { 00:11:51.574 "name": "BaseBdev3", 00:11:51.574 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:51.574 "is_configured": true, 00:11:51.574 "data_offset": 2048, 00:11:51.574 "data_size": 63488 00:11:51.574 }, 00:11:51.574 { 00:11:51.574 "name": "BaseBdev4", 00:11:51.574 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:51.574 "is_configured": true, 00:11:51.574 "data_offset": 2048, 00:11:51.574 "data_size": 63488 00:11:51.574 } 00:11:51.574 ] 00:11:51.574 }' 00:11:51.574 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.574 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.834 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.834 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.834 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.834 [2024-12-06 11:54:49.579554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:51.834 [2024-12-06 11:54:49.585530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:11:51.834 11:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.834 11:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:51.834 [2024-12-06 11:54:49.587777] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.214 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.214 "name": "raid_bdev1", 00:11:53.214 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:53.214 "strip_size_kb": 0, 00:11:53.214 "state": "online", 00:11:53.214 "raid_level": "raid1", 00:11:53.214 "superblock": true, 00:11:53.214 "num_base_bdevs": 4, 00:11:53.214 "num_base_bdevs_discovered": 4, 00:11:53.214 "num_base_bdevs_operational": 4, 00:11:53.214 "process": { 00:11:53.214 "type": "rebuild", 00:11:53.214 "target": "spare", 00:11:53.214 "progress": { 00:11:53.214 "blocks": 20480, 00:11:53.214 "percent": 32 00:11:53.214 } 00:11:53.214 }, 00:11:53.214 "base_bdevs_list": [ 00:11:53.214 { 00:11:53.214 "name": "spare", 00:11:53.214 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:53.214 "is_configured": true, 00:11:53.214 "data_offset": 2048, 00:11:53.214 "data_size": 63488 00:11:53.214 }, 00:11:53.214 { 00:11:53.214 "name": "BaseBdev2", 00:11:53.214 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:53.214 "is_configured": true, 00:11:53.214 "data_offset": 2048, 00:11:53.214 "data_size": 63488 00:11:53.214 }, 00:11:53.214 { 00:11:53.214 "name": "BaseBdev3", 00:11:53.214 "uuid": 
"f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:53.214 "is_configured": true, 00:11:53.214 "data_offset": 2048, 00:11:53.214 "data_size": 63488 00:11:53.214 }, 00:11:53.214 { 00:11:53.215 "name": "BaseBdev4", 00:11:53.215 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:53.215 "is_configured": true, 00:11:53.215 "data_offset": 2048, 00:11:53.215 "data_size": 63488 00:11:53.215 } 00:11:53.215 ] 00:11:53.215 }' 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.215 [2024-12-06 11:54:50.747696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.215 [2024-12-06 11:54:50.796395] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.215 [2024-12-06 11:54:50.796474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.215 [2024-12-06 11:54:50.796497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.215 [2024-12-06 11:54:50.796506] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.215 "name": "raid_bdev1", 00:11:53.215 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:53.215 "strip_size_kb": 0, 00:11:53.215 "state": "online", 00:11:53.215 "raid_level": "raid1", 00:11:53.215 "superblock": true, 00:11:53.215 "num_base_bdevs": 4, 00:11:53.215 
"num_base_bdevs_discovered": 3, 00:11:53.215 "num_base_bdevs_operational": 3, 00:11:53.215 "base_bdevs_list": [ 00:11:53.215 { 00:11:53.215 "name": null, 00:11:53.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.215 "is_configured": false, 00:11:53.215 "data_offset": 0, 00:11:53.215 "data_size": 63488 00:11:53.215 }, 00:11:53.215 { 00:11:53.215 "name": "BaseBdev2", 00:11:53.215 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:53.215 "is_configured": true, 00:11:53.215 "data_offset": 2048, 00:11:53.215 "data_size": 63488 00:11:53.215 }, 00:11:53.215 { 00:11:53.215 "name": "BaseBdev3", 00:11:53.215 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:53.215 "is_configured": true, 00:11:53.215 "data_offset": 2048, 00:11:53.215 "data_size": 63488 00:11:53.215 }, 00:11:53.215 { 00:11:53.215 "name": "BaseBdev4", 00:11:53.215 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:53.215 "is_configured": true, 00:11:53.215 "data_offset": 2048, 00:11:53.215 "data_size": 63488 00:11:53.215 } 00:11:53.215 ] 00:11:53.215 }' 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.215 11:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.475 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.735 "name": "raid_bdev1", 00:11:53.735 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:53.735 "strip_size_kb": 0, 00:11:53.735 "state": "online", 00:11:53.735 "raid_level": "raid1", 00:11:53.735 "superblock": true, 00:11:53.735 "num_base_bdevs": 4, 00:11:53.735 "num_base_bdevs_discovered": 3, 00:11:53.735 "num_base_bdevs_operational": 3, 00:11:53.735 "base_bdevs_list": [ 00:11:53.735 { 00:11:53.735 "name": null, 00:11:53.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.735 "is_configured": false, 00:11:53.735 "data_offset": 0, 00:11:53.735 "data_size": 63488 00:11:53.735 }, 00:11:53.735 { 00:11:53.735 "name": "BaseBdev2", 00:11:53.735 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:53.735 "is_configured": true, 00:11:53.735 "data_offset": 2048, 00:11:53.735 "data_size": 63488 00:11:53.735 }, 00:11:53.735 { 00:11:53.735 "name": "BaseBdev3", 00:11:53.735 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:53.735 "is_configured": true, 00:11:53.735 "data_offset": 2048, 00:11:53.735 "data_size": 63488 00:11:53.735 }, 00:11:53.735 { 00:11:53.735 "name": "BaseBdev4", 00:11:53.735 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:53.735 "is_configured": true, 00:11:53.735 "data_offset": 2048, 00:11:53.735 "data_size": 63488 00:11:53.735 } 00:11:53.735 ] 00:11:53.735 }' 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.735 [2024-12-06 11:54:51.342893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:53.735 [2024-12-06 11:54:51.348614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.735 11:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:53.735 [2024-12-06 11:54:51.350862] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.674 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.674 "name": "raid_bdev1", 00:11:54.674 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:54.674 "strip_size_kb": 0, 00:11:54.674 "state": "online", 00:11:54.674 "raid_level": "raid1", 00:11:54.674 "superblock": true, 00:11:54.674 "num_base_bdevs": 4, 00:11:54.674 "num_base_bdevs_discovered": 4, 00:11:54.674 "num_base_bdevs_operational": 4, 00:11:54.674 "process": { 00:11:54.674 "type": "rebuild", 00:11:54.674 "target": "spare", 00:11:54.674 "progress": { 00:11:54.674 "blocks": 20480, 00:11:54.674 "percent": 32 00:11:54.674 } 00:11:54.674 }, 00:11:54.674 "base_bdevs_list": [ 00:11:54.674 { 00:11:54.674 "name": "spare", 00:11:54.674 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:54.674 "is_configured": true, 00:11:54.674 "data_offset": 2048, 00:11:54.674 "data_size": 63488 00:11:54.674 }, 00:11:54.674 { 00:11:54.674 "name": "BaseBdev2", 00:11:54.674 "uuid": "b0246306-bdee-53be-b692-a07160016cbd", 00:11:54.674 "is_configured": true, 00:11:54.674 "data_offset": 2048, 00:11:54.674 "data_size": 63488 00:11:54.674 }, 00:11:54.674 { 00:11:54.674 "name": "BaseBdev3", 00:11:54.674 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:54.674 "is_configured": true, 00:11:54.674 "data_offset": 2048, 00:11:54.674 "data_size": 63488 00:11:54.674 }, 00:11:54.674 { 00:11:54.674 "name": "BaseBdev4", 00:11:54.674 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:54.674 "is_configured": true, 00:11:54.674 "data_offset": 2048, 00:11:54.674 "data_size": 63488 00:11:54.674 } 00:11:54.674 ] 00:11:54.675 }' 00:11:54.675 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:11:54.934 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.934 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.934 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:54.935 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.935 [2024-12-06 11:54:52.519083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.935 [2024-12-06 11:54:52.658730] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.935 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.195 "name": "raid_bdev1", 00:11:55.195 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:55.195 "strip_size_kb": 0, 00:11:55.195 "state": "online", 00:11:55.195 "raid_level": "raid1", 00:11:55.195 "superblock": true, 00:11:55.195 "num_base_bdevs": 4, 00:11:55.195 "num_base_bdevs_discovered": 3, 00:11:55.195 "num_base_bdevs_operational": 3, 00:11:55.195 "process": { 00:11:55.195 "type": "rebuild", 00:11:55.195 "target": "spare", 00:11:55.195 "progress": { 00:11:55.195 "blocks": 24576, 00:11:55.195 "percent": 38 00:11:55.195 } 00:11:55.195 }, 00:11:55.195 "base_bdevs_list": [ 00:11:55.195 { 00:11:55.195 "name": "spare", 00:11:55.195 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:55.195 "is_configured": true, 00:11:55.195 "data_offset": 2048, 00:11:55.195 "data_size": 63488 00:11:55.195 }, 00:11:55.195 { 00:11:55.195 "name": null, 00:11:55.195 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:55.195 "is_configured": false, 00:11:55.195 "data_offset": 0, 00:11:55.195 "data_size": 63488 00:11:55.195 }, 00:11:55.195 { 00:11:55.195 "name": "BaseBdev3", 00:11:55.195 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:55.195 "is_configured": true, 00:11:55.195 "data_offset": 2048, 00:11:55.195 "data_size": 63488 00:11:55.195 }, 00:11:55.195 { 00:11:55.195 "name": "BaseBdev4", 00:11:55.195 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:55.195 "is_configured": true, 00:11:55.195 "data_offset": 2048, 00:11:55.195 "data_size": 63488 00:11:55.195 } 00:11:55.195 ] 00:11:55.195 }' 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.195 
11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.195 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.195 "name": "raid_bdev1", 00:11:55.195 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:55.195 "strip_size_kb": 0, 00:11:55.195 "state": "online", 00:11:55.195 "raid_level": "raid1", 00:11:55.195 "superblock": true, 00:11:55.195 "num_base_bdevs": 4, 00:11:55.195 "num_base_bdevs_discovered": 3, 00:11:55.195 "num_base_bdevs_operational": 3, 00:11:55.195 "process": { 00:11:55.195 "type": "rebuild", 00:11:55.195 "target": "spare", 00:11:55.195 "progress": { 00:11:55.195 "blocks": 26624, 00:11:55.195 "percent": 41 00:11:55.195 } 00:11:55.195 }, 00:11:55.195 "base_bdevs_list": [ 00:11:55.195 { 00:11:55.195 "name": "spare", 00:11:55.195 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:55.195 "is_configured": true, 00:11:55.195 "data_offset": 2048, 00:11:55.195 "data_size": 63488 00:11:55.195 }, 00:11:55.195 { 00:11:55.195 "name": null, 00:11:55.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.195 "is_configured": false, 00:11:55.195 "data_offset": 0, 00:11:55.195 "data_size": 63488 00:11:55.195 }, 00:11:55.196 { 00:11:55.196 "name": "BaseBdev3", 00:11:55.196 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:55.196 "is_configured": true, 00:11:55.196 "data_offset": 2048, 00:11:55.196 "data_size": 63488 00:11:55.196 }, 00:11:55.196 { 00:11:55.196 "name": "BaseBdev4", 00:11:55.196 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:55.196 "is_configured": true, 00:11:55.196 "data_offset": 2048, 00:11:55.196 "data_size": 63488 
00:11:55.196 } 00:11:55.196 ] 00:11:55.196 }' 00:11:55.196 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.196 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.196 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.196 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.196 11:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.577 "name": "raid_bdev1", 00:11:56.577 "uuid": 
"c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:56.577 "strip_size_kb": 0, 00:11:56.577 "state": "online", 00:11:56.577 "raid_level": "raid1", 00:11:56.577 "superblock": true, 00:11:56.577 "num_base_bdevs": 4, 00:11:56.577 "num_base_bdevs_discovered": 3, 00:11:56.577 "num_base_bdevs_operational": 3, 00:11:56.577 "process": { 00:11:56.577 "type": "rebuild", 00:11:56.577 "target": "spare", 00:11:56.577 "progress": { 00:11:56.577 "blocks": 49152, 00:11:56.577 "percent": 77 00:11:56.577 } 00:11:56.577 }, 00:11:56.577 "base_bdevs_list": [ 00:11:56.577 { 00:11:56.577 "name": "spare", 00:11:56.577 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:56.577 "is_configured": true, 00:11:56.577 "data_offset": 2048, 00:11:56.577 "data_size": 63488 00:11:56.577 }, 00:11:56.577 { 00:11:56.577 "name": null, 00:11:56.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.577 "is_configured": false, 00:11:56.577 "data_offset": 0, 00:11:56.577 "data_size": 63488 00:11:56.577 }, 00:11:56.577 { 00:11:56.577 "name": "BaseBdev3", 00:11:56.577 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:56.577 "is_configured": true, 00:11:56.577 "data_offset": 2048, 00:11:56.577 "data_size": 63488 00:11:56.577 }, 00:11:56.577 { 00:11:56.577 "name": "BaseBdev4", 00:11:56.577 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:56.577 "is_configured": true, 00:11:56.577 "data_offset": 2048, 00:11:56.577 "data_size": 63488 00:11:56.577 } 00:11:56.577 ] 00:11:56.577 }' 00:11:56.577 11:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.577 11:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.577 11:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.577 11:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.577 11:54:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.836 [2024-12-06 11:54:54.567552] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:56.836 [2024-12-06 11:54:54.567688] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:56.836 [2024-12-06 11:54:54.567835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.407 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.407 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.407 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.407 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.407 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.408 "name": "raid_bdev1", 00:11:57.408 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:57.408 "strip_size_kb": 0, 00:11:57.408 "state": "online", 00:11:57.408 "raid_level": "raid1", 00:11:57.408 "superblock": true, 00:11:57.408 "num_base_bdevs": 
4, 00:11:57.408 "num_base_bdevs_discovered": 3, 00:11:57.408 "num_base_bdevs_operational": 3, 00:11:57.408 "base_bdevs_list": [ 00:11:57.408 { 00:11:57.408 "name": "spare", 00:11:57.408 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:57.408 "is_configured": true, 00:11:57.408 "data_offset": 2048, 00:11:57.408 "data_size": 63488 00:11:57.408 }, 00:11:57.408 { 00:11:57.408 "name": null, 00:11:57.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.408 "is_configured": false, 00:11:57.408 "data_offset": 0, 00:11:57.408 "data_size": 63488 00:11:57.408 }, 00:11:57.408 { 00:11:57.408 "name": "BaseBdev3", 00:11:57.408 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:57.408 "is_configured": true, 00:11:57.408 "data_offset": 2048, 00:11:57.408 "data_size": 63488 00:11:57.408 }, 00:11:57.408 { 00:11:57.408 "name": "BaseBdev4", 00:11:57.408 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:57.408 "is_configured": true, 00:11:57.408 "data_offset": 2048, 00:11:57.408 "data_size": 63488 00:11:57.408 } 00:11:57.408 ] 00:11:57.408 }' 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:57.408 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.667 11:54:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.667 "name": "raid_bdev1", 00:11:57.667 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:57.667 "strip_size_kb": 0, 00:11:57.667 "state": "online", 00:11:57.667 "raid_level": "raid1", 00:11:57.667 "superblock": true, 00:11:57.667 "num_base_bdevs": 4, 00:11:57.667 "num_base_bdevs_discovered": 3, 00:11:57.667 "num_base_bdevs_operational": 3, 00:11:57.667 "base_bdevs_list": [ 00:11:57.667 { 00:11:57.667 "name": "spare", 00:11:57.667 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:57.667 "is_configured": true, 00:11:57.667 "data_offset": 2048, 00:11:57.667 "data_size": 63488 00:11:57.667 }, 00:11:57.667 { 00:11:57.667 "name": null, 00:11:57.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.667 "is_configured": false, 00:11:57.667 "data_offset": 0, 00:11:57.667 "data_size": 63488 00:11:57.667 }, 00:11:57.667 { 00:11:57.667 "name": "BaseBdev3", 00:11:57.667 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:57.667 "is_configured": true, 00:11:57.667 "data_offset": 2048, 00:11:57.667 "data_size": 63488 00:11:57.667 }, 00:11:57.667 { 00:11:57.667 "name": "BaseBdev4", 00:11:57.667 "uuid": 
"bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:57.667 "is_configured": true, 00:11:57.667 "data_offset": 2048, 00:11:57.667 "data_size": 63488 00:11:57.667 } 00:11:57.667 ] 00:11:57.667 }' 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:57.667 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.668 11:54:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.668 "name": "raid_bdev1", 00:11:57.668 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:57.668 "strip_size_kb": 0, 00:11:57.668 "state": "online", 00:11:57.668 "raid_level": "raid1", 00:11:57.668 "superblock": true, 00:11:57.668 "num_base_bdevs": 4, 00:11:57.668 "num_base_bdevs_discovered": 3, 00:11:57.668 "num_base_bdevs_operational": 3, 00:11:57.668 "base_bdevs_list": [ 00:11:57.668 { 00:11:57.668 "name": "spare", 00:11:57.668 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:57.668 "is_configured": true, 00:11:57.668 "data_offset": 2048, 00:11:57.668 "data_size": 63488 00:11:57.668 }, 00:11:57.668 { 00:11:57.668 "name": null, 00:11:57.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.668 "is_configured": false, 00:11:57.668 "data_offset": 0, 00:11:57.668 "data_size": 63488 00:11:57.668 }, 00:11:57.668 { 00:11:57.668 "name": "BaseBdev3", 00:11:57.668 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:57.668 "is_configured": true, 00:11:57.668 "data_offset": 2048, 00:11:57.668 "data_size": 63488 00:11:57.668 }, 00:11:57.668 { 00:11:57.668 "name": "BaseBdev4", 00:11:57.668 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:57.668 "is_configured": true, 00:11:57.668 "data_offset": 2048, 00:11:57.668 "data_size": 63488 00:11:57.668 } 00:11:57.668 ] 00:11:57.668 }' 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.668 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.236 [2024-12-06 11:54:55.761168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.236 [2024-12-06 11:54:55.761277] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.236 [2024-12-06 11:54:55.761432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.236 [2024-12-06 11:54:55.761584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.236 [2024-12-06 11:54:55.761662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:58.236 
11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.236 11:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:58.496 /dev/nbd0 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:58.496 11:54:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.496 1+0 records in 00:11:58.496 1+0 records out 00:11:58.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357543 s, 11.5 MB/s 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.496 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:58.756 /dev/nbd1 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.756 1+0 records in 00:11:58.756 1+0 records out 00:11:58.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269006 s, 15.2 MB/s 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.756 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.015 [2024-12-06 11:54:56.763395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:59.015 [2024-12-06 11:54:56.763502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.015 [2024-12-06 11:54:56.763553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:59.015 [2024-12-06 11:54:56.763605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.015 [2024-12-06 11:54:56.765791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.015 [2024-12-06 11:54:56.765883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:11:59.015 [2024-12-06 11:54:56.766044] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:59.015 [2024-12-06 11:54:56.766135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.015 [2024-12-06 11:54:56.766339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.015 [2024-12-06 11:54:56.766484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.015 spare 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.015 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 [2024-12-06 11:54:56.866403] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:59.274 [2024-12-06 11:54:56.866465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.274 [2024-12-06 11:54:56.866778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:59.274 [2024-12-06 11:54:56.866945] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:59.274 [2024-12-06 11:54:56.866990] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:59.274 [2024-12-06 11:54:56.867172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.274 11:54:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.274 "name": "raid_bdev1", 00:11:59.274 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:59.274 "strip_size_kb": 0, 00:11:59.274 "state": "online", 00:11:59.274 "raid_level": "raid1", 00:11:59.274 "superblock": true, 00:11:59.274 "num_base_bdevs": 4, 00:11:59.274 "num_base_bdevs_discovered": 3, 00:11:59.274 "num_base_bdevs_operational": 3, 00:11:59.274 "base_bdevs_list": [ 00:11:59.274 { 
00:11:59.274 "name": "spare", 00:11:59.274 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:59.274 "is_configured": true, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 }, 00:11:59.274 { 00:11:59.274 "name": null, 00:11:59.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.274 "is_configured": false, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 }, 00:11:59.274 { 00:11:59.274 "name": "BaseBdev3", 00:11:59.274 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:59.274 "is_configured": true, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 }, 00:11:59.274 { 00:11:59.274 "name": "BaseBdev4", 00:11:59.274 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:59.274 "is_configured": true, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 } 00:11:59.274 ] 00:11:59.274 }' 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.274 11:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.542 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.542 
11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.802 "name": "raid_bdev1", 00:11:59.802 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:59.802 "strip_size_kb": 0, 00:11:59.802 "state": "online", 00:11:59.802 "raid_level": "raid1", 00:11:59.802 "superblock": true, 00:11:59.802 "num_base_bdevs": 4, 00:11:59.802 "num_base_bdevs_discovered": 3, 00:11:59.802 "num_base_bdevs_operational": 3, 00:11:59.802 "base_bdevs_list": [ 00:11:59.802 { 00:11:59.802 "name": "spare", 00:11:59.802 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": null, 00:11:59.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.802 "is_configured": false, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": "BaseBdev3", 00:11:59.802 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": "BaseBdev4", 00:11:59.802 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 } 00:11:59.802 ] 00:11:59.802 }' 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.802 11:54:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.802 [2024-12-06 11:54:57.438318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.802 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.803 11:54:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.803 "name": "raid_bdev1", 00:11:59.803 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:11:59.803 "strip_size_kb": 0, 00:11:59.803 "state": "online", 00:11:59.803 "raid_level": "raid1", 00:11:59.803 "superblock": true, 00:11:59.803 "num_base_bdevs": 4, 00:11:59.803 "num_base_bdevs_discovered": 2, 00:11:59.803 "num_base_bdevs_operational": 2, 00:11:59.803 "base_bdevs_list": [ 00:11:59.803 { 00:11:59.803 "name": null, 00:11:59.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.803 "is_configured": false, 00:11:59.803 "data_offset": 0, 00:11:59.803 "data_size": 63488 00:11:59.803 }, 00:11:59.803 { 00:11:59.803 "name": null, 00:11:59.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.803 "is_configured": false, 00:11:59.803 "data_offset": 2048, 00:11:59.803 "data_size": 63488 00:11:59.803 }, 00:11:59.803 { 00:11:59.803 "name": "BaseBdev3", 00:11:59.803 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:11:59.803 
"is_configured": true, 00:11:59.803 "data_offset": 2048, 00:11:59.803 "data_size": 63488 00:11:59.803 }, 00:11:59.803 { 00:11:59.803 "name": "BaseBdev4", 00:11:59.803 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:11:59.803 "is_configured": true, 00:11:59.803 "data_offset": 2048, 00:11:59.803 "data_size": 63488 00:11:59.803 } 00:11:59.803 ] 00:11:59.803 }' 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.803 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.369 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.369 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.369 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.369 [2024-12-06 11:54:57.857597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.369 [2024-12-06 11:54:57.857824] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:00.369 [2024-12-06 11:54:57.857895] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:00.369 [2024-12-06 11:54:57.857966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.369 [2024-12-06 11:54:57.861278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:00.369 [2024-12-06 11:54:57.863124] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.369 11:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.369 11:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.304 "name": "raid_bdev1", 00:12:01.304 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:01.304 "strip_size_kb": 0, 00:12:01.304 "state": "online", 00:12:01.304 "raid_level": "raid1", 
00:12:01.304 "superblock": true, 00:12:01.304 "num_base_bdevs": 4, 00:12:01.304 "num_base_bdevs_discovered": 3, 00:12:01.304 "num_base_bdevs_operational": 3, 00:12:01.304 "process": { 00:12:01.304 "type": "rebuild", 00:12:01.304 "target": "spare", 00:12:01.304 "progress": { 00:12:01.304 "blocks": 20480, 00:12:01.304 "percent": 32 00:12:01.304 } 00:12:01.304 }, 00:12:01.304 "base_bdevs_list": [ 00:12:01.304 { 00:12:01.304 "name": "spare", 00:12:01.304 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:12:01.304 "is_configured": true, 00:12:01.304 "data_offset": 2048, 00:12:01.304 "data_size": 63488 00:12:01.304 }, 00:12:01.304 { 00:12:01.304 "name": null, 00:12:01.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.304 "is_configured": false, 00:12:01.304 "data_offset": 2048, 00:12:01.304 "data_size": 63488 00:12:01.304 }, 00:12:01.304 { 00:12:01.304 "name": "BaseBdev3", 00:12:01.304 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:01.304 "is_configured": true, 00:12:01.304 "data_offset": 2048, 00:12:01.304 "data_size": 63488 00:12:01.304 }, 00:12:01.304 { 00:12:01.304 "name": "BaseBdev4", 00:12:01.304 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:01.304 "is_configured": true, 00:12:01.304 "data_offset": 2048, 00:12:01.304 "data_size": 63488 00:12:01.304 } 00:12:01.304 ] 00:12:01.304 }' 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:01.304 11:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.304 [2024-12-06 11:54:58.997881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.563 [2024-12-06 11:54:59.067065] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.563 [2024-12-06 11:54:59.067129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.563 [2024-12-06 11:54:59.067149] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.563 [2024-12-06 11:54:59.067161] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.563 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.564 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.564 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.564 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.564 "name": "raid_bdev1", 00:12:01.564 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:01.564 "strip_size_kb": 0, 00:12:01.564 "state": "online", 00:12:01.564 "raid_level": "raid1", 00:12:01.564 "superblock": true, 00:12:01.564 "num_base_bdevs": 4, 00:12:01.564 "num_base_bdevs_discovered": 2, 00:12:01.564 "num_base_bdevs_operational": 2, 00:12:01.564 "base_bdevs_list": [ 00:12:01.564 { 00:12:01.564 "name": null, 00:12:01.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.564 "is_configured": false, 00:12:01.564 "data_offset": 0, 00:12:01.564 "data_size": 63488 00:12:01.564 }, 00:12:01.564 { 00:12:01.564 "name": null, 00:12:01.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.564 "is_configured": false, 00:12:01.564 "data_offset": 2048, 00:12:01.564 "data_size": 63488 00:12:01.564 }, 00:12:01.564 { 00:12:01.564 "name": "BaseBdev3", 00:12:01.564 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:01.564 "is_configured": true, 00:12:01.564 "data_offset": 2048, 00:12:01.564 "data_size": 63488 00:12:01.564 }, 00:12:01.564 { 00:12:01.564 "name": "BaseBdev4", 00:12:01.564 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:01.564 "is_configured": true, 00:12:01.564 "data_offset": 2048, 00:12:01.564 "data_size": 63488 00:12:01.564 } 00:12:01.564 ] 00:12:01.564 }' 00:12:01.564 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:01.564 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.823 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.823 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.823 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.823 [2024-12-06 11:54:59.522005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:01.823 [2024-12-06 11:54:59.522124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.823 [2024-12-06 11:54:59.522181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:01.823 [2024-12-06 11:54:59.522227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.823 [2024-12-06 11:54:59.522719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.823 [2024-12-06 11:54:59.522794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.823 [2024-12-06 11:54:59.522950] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:01.823 [2024-12-06 11:54:59.523006] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:01.823 [2024-12-06 11:54:59.523050] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:01.823 [2024-12-06 11:54:59.523117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.823 [2024-12-06 11:54:59.526315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:01.823 spare 00:12:01.823 11:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.823 11:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:01.823 [2024-12-06 11:54:59.528180] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.206 "name": "raid_bdev1", 00:12:03.206 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:03.206 "strip_size_kb": 0, 00:12:03.206 "state": "online", 00:12:03.206 
"raid_level": "raid1", 00:12:03.206 "superblock": true, 00:12:03.206 "num_base_bdevs": 4, 00:12:03.206 "num_base_bdevs_discovered": 3, 00:12:03.206 "num_base_bdevs_operational": 3, 00:12:03.206 "process": { 00:12:03.206 "type": "rebuild", 00:12:03.206 "target": "spare", 00:12:03.206 "progress": { 00:12:03.206 "blocks": 20480, 00:12:03.206 "percent": 32 00:12:03.206 } 00:12:03.206 }, 00:12:03.206 "base_bdevs_list": [ 00:12:03.206 { 00:12:03.206 "name": "spare", 00:12:03.206 "uuid": "b7d5cfec-ea19-553b-bef5-0f2ca583201b", 00:12:03.206 "is_configured": true, 00:12:03.206 "data_offset": 2048, 00:12:03.206 "data_size": 63488 00:12:03.206 }, 00:12:03.206 { 00:12:03.206 "name": null, 00:12:03.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.206 "is_configured": false, 00:12:03.206 "data_offset": 2048, 00:12:03.206 "data_size": 63488 00:12:03.206 }, 00:12:03.206 { 00:12:03.206 "name": "BaseBdev3", 00:12:03.206 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:03.206 "is_configured": true, 00:12:03.206 "data_offset": 2048, 00:12:03.206 "data_size": 63488 00:12:03.206 }, 00:12:03.206 { 00:12:03.206 "name": "BaseBdev4", 00:12:03.206 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:03.206 "is_configured": true, 00:12:03.206 "data_offset": 2048, 00:12:03.206 "data_size": 63488 00:12:03.206 } 00:12:03.206 ] 00:12:03.206 }' 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.206 [2024-12-06 11:55:00.677047] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.206 [2024-12-06 11:55:00.732118] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:03.206 [2024-12-06 11:55:00.732176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.206 [2024-12-06 11:55:00.732200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.206 [2024-12-06 11:55:00.732209] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:03.206 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.207 
11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.207 "name": "raid_bdev1", 00:12:03.207 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:03.207 "strip_size_kb": 0, 00:12:03.207 "state": "online", 00:12:03.207 "raid_level": "raid1", 00:12:03.207 "superblock": true, 00:12:03.207 "num_base_bdevs": 4, 00:12:03.207 "num_base_bdevs_discovered": 2, 00:12:03.207 "num_base_bdevs_operational": 2, 00:12:03.207 "base_bdevs_list": [ 00:12:03.207 { 00:12:03.207 "name": null, 00:12:03.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.207 "is_configured": false, 00:12:03.207 "data_offset": 0, 00:12:03.207 "data_size": 63488 00:12:03.207 }, 00:12:03.207 { 00:12:03.207 "name": null, 00:12:03.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.207 "is_configured": false, 00:12:03.207 "data_offset": 2048, 00:12:03.207 "data_size": 63488 00:12:03.207 }, 00:12:03.207 { 00:12:03.207 "name": "BaseBdev3", 00:12:03.207 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:03.207 "is_configured": true, 00:12:03.207 "data_offset": 2048, 00:12:03.207 "data_size": 63488 00:12:03.207 }, 00:12:03.207 { 00:12:03.207 "name": "BaseBdev4", 00:12:03.207 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:03.207 "is_configured": true, 00:12:03.207 "data_offset": 2048, 00:12:03.207 "data_size": 63488 00:12:03.207 } 00:12:03.207 ] 00:12:03.207 }' 00:12:03.207 11:55:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.207 11:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.467 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.467 "name": "raid_bdev1", 00:12:03.467 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:03.468 "strip_size_kb": 0, 00:12:03.468 "state": "online", 00:12:03.468 "raid_level": "raid1", 00:12:03.468 "superblock": true, 00:12:03.468 "num_base_bdevs": 4, 00:12:03.468 "num_base_bdevs_discovered": 2, 00:12:03.468 "num_base_bdevs_operational": 2, 00:12:03.468 "base_bdevs_list": [ 00:12:03.468 { 00:12:03.468 "name": null, 00:12:03.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.468 "is_configured": false, 00:12:03.468 "data_offset": 0, 00:12:03.468 "data_size": 63488 00:12:03.468 }, 00:12:03.468 
{ 00:12:03.468 "name": null, 00:12:03.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.468 "is_configured": false, 00:12:03.468 "data_offset": 2048, 00:12:03.468 "data_size": 63488 00:12:03.468 }, 00:12:03.468 { 00:12:03.468 "name": "BaseBdev3", 00:12:03.468 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:03.468 "is_configured": true, 00:12:03.468 "data_offset": 2048, 00:12:03.468 "data_size": 63488 00:12:03.468 }, 00:12:03.468 { 00:12:03.468 "name": "BaseBdev4", 00:12:03.468 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:03.468 "is_configured": true, 00:12:03.468 "data_offset": 2048, 00:12:03.468 "data_size": 63488 00:12:03.468 } 00:12:03.468 ] 00:12:03.468 }' 00:12:03.468 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.727 [2024-12-06 11:55:01.319014] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.727 [2024-12-06 11:55:01.319123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.727 [2024-12-06 11:55:01.319163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:03.727 [2024-12-06 11:55:01.319175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.727 [2024-12-06 11:55:01.319670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.727 [2024-12-06 11:55:01.319699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.727 [2024-12-06 11:55:01.319799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:03.727 [2024-12-06 11:55:01.319818] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:03.727 [2024-12-06 11:55:01.319831] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:03.727 [2024-12-06 11:55:01.319845] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:03.727 BaseBdev1 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.727 11:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.666 11:55:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.666 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.666 "name": "raid_bdev1", 00:12:04.666 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:04.666 "strip_size_kb": 0, 00:12:04.667 "state": "online", 00:12:04.667 "raid_level": "raid1", 00:12:04.667 "superblock": true, 00:12:04.667 "num_base_bdevs": 4, 00:12:04.667 "num_base_bdevs_discovered": 2, 00:12:04.667 "num_base_bdevs_operational": 2, 00:12:04.667 "base_bdevs_list": [ 00:12:04.667 { 00:12:04.667 "name": null, 00:12:04.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.667 "is_configured": false, 00:12:04.667 "data_offset": 0, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": null, 00:12:04.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.667 
"is_configured": false, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": "BaseBdev3", 00:12:04.667 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": "BaseBdev4", 00:12:04.667 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 } 00:12:04.667 ] 00:12:04.667 }' 00:12:04.667 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.667 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:05.244 "name": "raid_bdev1", 00:12:05.244 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:05.244 "strip_size_kb": 0, 00:12:05.244 "state": "online", 00:12:05.244 "raid_level": "raid1", 00:12:05.244 "superblock": true, 00:12:05.244 "num_base_bdevs": 4, 00:12:05.244 "num_base_bdevs_discovered": 2, 00:12:05.244 "num_base_bdevs_operational": 2, 00:12:05.244 "base_bdevs_list": [ 00:12:05.244 { 00:12:05.244 "name": null, 00:12:05.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.244 "is_configured": false, 00:12:05.244 "data_offset": 0, 00:12:05.244 "data_size": 63488 00:12:05.244 }, 00:12:05.244 { 00:12:05.244 "name": null, 00:12:05.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.244 "is_configured": false, 00:12:05.244 "data_offset": 2048, 00:12:05.244 "data_size": 63488 00:12:05.244 }, 00:12:05.244 { 00:12:05.244 "name": "BaseBdev3", 00:12:05.244 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:05.244 "is_configured": true, 00:12:05.244 "data_offset": 2048, 00:12:05.244 "data_size": 63488 00:12:05.244 }, 00:12:05.244 { 00:12:05.244 "name": "BaseBdev4", 00:12:05.244 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:05.244 "is_configured": true, 00:12:05.244 "data_offset": 2048, 00:12:05.244 "data_size": 63488 00:12:05.244 } 00:12:05.244 ] 00:12:05.244 }' 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.244 [2024-12-06 11:55:02.868372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.244 [2024-12-06 11:55:02.868522] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:05.244 [2024-12-06 11:55:02.868542] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:05.244 request: 00:12:05.244 { 00:12:05.244 "base_bdev": "BaseBdev1", 00:12:05.244 "raid_bdev": "raid_bdev1", 00:12:05.244 "method": "bdev_raid_add_base_bdev", 00:12:05.244 "req_id": 1 00:12:05.244 } 00:12:05.244 Got JSON-RPC error response 00:12:05.244 response: 00:12:05.244 { 00:12:05.244 "code": -22, 00:12:05.244 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:05.244 } 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:05.244 11:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.244 "name": "raid_bdev1", 00:12:06.244 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:06.244 "strip_size_kb": 0, 00:12:06.244 "state": "online", 00:12:06.244 "raid_level": "raid1", 00:12:06.244 "superblock": true, 00:12:06.244 "num_base_bdevs": 4, 00:12:06.244 "num_base_bdevs_discovered": 2, 00:12:06.244 "num_base_bdevs_operational": 2, 00:12:06.244 "base_bdevs_list": [ 00:12:06.244 { 00:12:06.244 "name": null, 00:12:06.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.244 "is_configured": false, 00:12:06.244 "data_offset": 0, 00:12:06.244 "data_size": 63488 00:12:06.244 }, 00:12:06.244 { 00:12:06.244 "name": null, 00:12:06.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.244 "is_configured": false, 00:12:06.244 "data_offset": 2048, 00:12:06.244 "data_size": 63488 00:12:06.244 }, 00:12:06.244 { 00:12:06.244 "name": "BaseBdev3", 00:12:06.244 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:06.244 "is_configured": true, 00:12:06.244 "data_offset": 2048, 00:12:06.244 "data_size": 63488 00:12:06.244 }, 00:12:06.244 { 00:12:06.244 "name": "BaseBdev4", 00:12:06.244 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:06.244 "is_configured": true, 00:12:06.244 "data_offset": 2048, 00:12:06.244 "data_size": 63488 00:12:06.244 } 00:12:06.244 ] 00:12:06.244 }' 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.244 11:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.813 11:55:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.813 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.814 "name": "raid_bdev1", 00:12:06.814 "uuid": "c12b4ba6-f798-4c9e-a85a-c128afff383d", 00:12:06.814 "strip_size_kb": 0, 00:12:06.814 "state": "online", 00:12:06.814 "raid_level": "raid1", 00:12:06.814 "superblock": true, 00:12:06.814 "num_base_bdevs": 4, 00:12:06.814 "num_base_bdevs_discovered": 2, 00:12:06.814 "num_base_bdevs_operational": 2, 00:12:06.814 "base_bdevs_list": [ 00:12:06.814 { 00:12:06.814 "name": null, 00:12:06.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.814 "is_configured": false, 00:12:06.814 "data_offset": 0, 00:12:06.814 "data_size": 63488 00:12:06.814 }, 00:12:06.814 { 00:12:06.814 "name": null, 00:12:06.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.814 "is_configured": false, 00:12:06.814 "data_offset": 2048, 00:12:06.814 "data_size": 63488 00:12:06.814 }, 00:12:06.814 { 00:12:06.814 "name": "BaseBdev3", 00:12:06.814 "uuid": "f6c1cacd-2e78-5206-a131-b588660c08b2", 00:12:06.814 "is_configured": true, 00:12:06.814 "data_offset": 2048, 00:12:06.814 "data_size": 63488 00:12:06.814 }, 
00:12:06.814 { 00:12:06.814 "name": "BaseBdev4", 00:12:06.814 "uuid": "bff62fb2-0b51-595a-973e-09585d12b5b7", 00:12:06.814 "is_configured": true, 00:12:06.814 "data_offset": 2048, 00:12:06.814 "data_size": 63488 00:12:06.814 } 00:12:06.814 ] 00:12:06.814 }' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88223 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88223 ']' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88223 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88223 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.814 killing process with pid 88223 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88223' 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88223 00:12:06.814 Received shutdown signal, test time was about 60.000000 seconds 00:12:06.814 00:12:06.814 Latency(us) 00:12:06.814 
[2024-12-06T11:55:04.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.814 [2024-12-06T11:55:04.572Z] =================================================================================================================== 00:12:06.814 [2024-12-06T11:55:04.572Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:06.814 [2024-12-06 11:55:04.411020] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.814 [2024-12-06 11:55:04.411149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.814 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88223 00:12:06.814 [2024-12-06 11:55:04.411226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.814 [2024-12-06 11:55:04.411255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:06.814 [2024-12-06 11:55:04.462457] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:07.074 00:12:07.074 real 0m22.685s 00:12:07.074 user 0m27.536s 00:12:07.074 sys 0m3.586s 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.074 ************************************ 00:12:07.074 END TEST raid_rebuild_test_sb 00:12:07.074 ************************************ 00:12:07.074 11:55:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:07.074 11:55:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:07.074 11:55:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.074 11:55:04 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:07.074 ************************************ 00:12:07.074 START TEST raid_rebuild_test_io 00:12:07.074 ************************************ 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88953 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88953 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 88953 ']' 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.074 11:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.334 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:07.334 Zero copy mechanism will not be used. 00:12:07.334 [2024-12-06 11:55:04.860128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:07.334 [2024-12-06 11:55:04.860252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88953 ] 00:12:07.334 [2024-12-06 11:55:04.983951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.334 [2024-12-06 11:55:05.026421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.334 [2024-12-06 11:55:05.067938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.334 [2024-12-06 11:55:05.067977] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 BaseBdev1_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-12-06 11:55:05.698007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:08.271 [2024-12-06 11:55:05.698092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.271 [2024-12-06 11:55:05.698129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:08.271 [2024-12-06 11:55:05.698148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.271 [2024-12-06 11:55:05.700267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.271 [2024-12-06 11:55:05.700307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.271 BaseBdev1 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.271 BaseBdev2_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-12-06 11:55:05.734669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:08.271 [2024-12-06 11:55:05.734730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.271 [2024-12-06 11:55:05.734762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:08.271 [2024-12-06 11:55:05.734776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.271 [2024-12-06 11:55:05.737196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.271 [2024-12-06 11:55:05.737275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.271 BaseBdev2 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 BaseBdev3_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-12-06 11:55:05.763222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:08.271 [2024-12-06 11:55:05.763293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.271 [2024-12-06 11:55:05.763355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:08.271 [2024-12-06 11:55:05.763368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.271 [2024-12-06 11:55:05.765369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.271 [2024-12-06 11:55:05.765403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:08.271 BaseBdev3 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 BaseBdev4_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-12-06 11:55:05.791683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:08.271 [2024-12-06 11:55:05.791752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.271 [2024-12-06 11:55:05.791780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:08.271 [2024-12-06 11:55:05.791792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.271 [2024-12-06 11:55:05.793831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.271 [2024-12-06 11:55:05.793866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:08.271 BaseBdev4 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 spare_malloc 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 spare_delay 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 
11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-12-06 11:55:05.832074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:08.271 [2024-12-06 11:55:05.832124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.271 [2024-12-06 11:55:05.832148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:08.271 [2024-12-06 11:55:05.832160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.272 [2024-12-06 11:55:05.834209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.272 [2024-12-06 11:55:05.834268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:08.272 spare 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.272 [2024-12-06 11:55:05.844113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.272 [2024-12-06 11:55:05.845914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.272 [2024-12-06 11:55:05.845992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.272 [2024-12-06 11:55:05.846038] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.272 [2024-12-06 11:55:05.846114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:08.272 [2024-12-06 11:55:05.846123] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:08.272 [2024-12-06 11:55:05.846394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:08.272 [2024-12-06 11:55:05.846523] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:08.272 [2024-12-06 11:55:05.846548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:08.272 [2024-12-06 11:55:05.846671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.272 11:55:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.272 "name": "raid_bdev1", 00:12:08.272 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:08.272 "strip_size_kb": 0, 00:12:08.272 "state": "online", 00:12:08.272 "raid_level": "raid1", 00:12:08.272 "superblock": false, 00:12:08.272 "num_base_bdevs": 4, 00:12:08.272 "num_base_bdevs_discovered": 4, 00:12:08.272 "num_base_bdevs_operational": 4, 00:12:08.272 "base_bdevs_list": [ 00:12:08.272 { 00:12:08.272 "name": "BaseBdev1", 00:12:08.272 "uuid": "3cf55af4-7389-555b-9205-23e092beb207", 00:12:08.272 "is_configured": true, 00:12:08.272 "data_offset": 0, 00:12:08.272 "data_size": 65536 00:12:08.272 }, 00:12:08.272 { 00:12:08.272 "name": "BaseBdev2", 00:12:08.272 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:08.272 "is_configured": true, 00:12:08.272 "data_offset": 0, 00:12:08.272 "data_size": 65536 00:12:08.272 }, 00:12:08.272 { 00:12:08.272 "name": "BaseBdev3", 00:12:08.272 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:08.272 "is_configured": true, 00:12:08.272 "data_offset": 0, 00:12:08.272 "data_size": 65536 00:12:08.272 }, 00:12:08.272 { 00:12:08.272 "name": "BaseBdev4", 00:12:08.272 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:08.272 "is_configured": true, 00:12:08.272 "data_offset": 0, 00:12:08.272 "data_size": 65536 
00:12:08.272 } 00:12:08.272 ] 00:12:08.272 }' 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.272 11:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.531 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.531 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.531 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.531 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:08.531 [2024-12-06 11:55:06.259689] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.531 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.532 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:08.532 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.532 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.532 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.532 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.791 [2024-12-06 11:55:06.335247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.791 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.791 "name": "raid_bdev1", 00:12:08.791 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:08.791 "strip_size_kb": 0, 00:12:08.791 "state": "online", 00:12:08.791 "raid_level": "raid1", 00:12:08.791 "superblock": false, 00:12:08.791 "num_base_bdevs": 4, 00:12:08.791 "num_base_bdevs_discovered": 3, 00:12:08.791 "num_base_bdevs_operational": 3, 00:12:08.791 "base_bdevs_list": [ 00:12:08.791 { 00:12:08.791 "name": null, 00:12:08.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.791 "is_configured": false, 00:12:08.791 "data_offset": 0, 00:12:08.791 "data_size": 65536 00:12:08.791 }, 00:12:08.791 { 00:12:08.791 "name": "BaseBdev2", 00:12:08.791 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:08.791 "is_configured": true, 00:12:08.791 "data_offset": 0, 00:12:08.791 "data_size": 65536 00:12:08.791 }, 00:12:08.791 { 00:12:08.791 "name": "BaseBdev3", 00:12:08.791 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:08.791 "is_configured": true, 00:12:08.791 "data_offset": 0, 00:12:08.791 "data_size": 65536 00:12:08.791 }, 00:12:08.791 { 00:12:08.791 "name": "BaseBdev4", 00:12:08.791 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:08.792 "is_configured": true, 00:12:08.792 "data_offset": 0, 00:12:08.792 "data_size": 65536 00:12:08.792 } 00:12:08.792 ] 00:12:08.792 }' 00:12:08.792 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.792 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.792 [2024-12-06 11:55:06.401152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:08.792 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:08.792 Zero copy mechanism will not be used. 
00:12:08.792 Running I/O for 60 seconds... 00:12:09.052 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.052 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.052 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.052 [2024-12-06 11:55:06.768960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.052 11:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.052 11:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:09.312 [2024-12-06 11:55:06.824417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:09.312 [2024-12-06 11:55:06.826348] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.312 [2024-12-06 11:55:06.935106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.312 [2024-12-06 11:55:06.935634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.312 [2024-12-06 11:55:07.064019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.312 [2024-12-06 11:55:07.064649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.882 [2024-12-06 11:55:07.397080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.882 181.00 IOPS, 543.00 MiB/s [2024-12-06T11:55:07.640Z] [2024-12-06 11:55:07.612711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:09.882 [2024-12-06 
11:55:07.613383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.142 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.142 "name": "raid_bdev1", 00:12:10.142 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:10.142 "strip_size_kb": 0, 00:12:10.142 "state": "online", 00:12:10.142 "raid_level": "raid1", 00:12:10.142 "superblock": false, 00:12:10.142 "num_base_bdevs": 4, 00:12:10.142 "num_base_bdevs_discovered": 4, 00:12:10.142 "num_base_bdevs_operational": 4, 00:12:10.142 "process": { 00:12:10.142 "type": "rebuild", 00:12:10.142 "target": "spare", 00:12:10.142 "progress": { 00:12:10.142 "blocks": 10240, 00:12:10.142 "percent": 15 00:12:10.142 } 00:12:10.142 }, 00:12:10.142 "base_bdevs_list": [ 00:12:10.142 { 00:12:10.142 "name": "spare", 
00:12:10.142 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:10.142 "is_configured": true, 00:12:10.143 "data_offset": 0, 00:12:10.143 "data_size": 65536 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "name": "BaseBdev2", 00:12:10.143 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:10.143 "is_configured": true, 00:12:10.143 "data_offset": 0, 00:12:10.143 "data_size": 65536 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "name": "BaseBdev3", 00:12:10.143 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:10.143 "is_configured": true, 00:12:10.143 "data_offset": 0, 00:12:10.143 "data_size": 65536 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "name": "BaseBdev4", 00:12:10.143 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:10.143 "is_configured": true, 00:12:10.143 "data_offset": 0, 00:12:10.143 "data_size": 65536 00:12:10.143 } 00:12:10.143 ] 00:12:10.143 }' 00:12:10.143 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.143 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.143 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.403 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.403 11:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:10.403 11:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.403 11:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.403 [2024-12-06 11:55:07.937650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.403 [2024-12-06 11:55:07.944513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:10.403 [2024-12-06 11:55:07.945800] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:10.403 [2024-12-06 11:55:08.052100] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:10.403 [2024-12-06 11:55:08.067621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.403 [2024-12-06 11:55:08.067673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.403 [2024-12-06 11:55:08.067705] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:10.403 [2024-12-06 11:55:08.079800] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.403 11:55:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.403 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.403 "name": "raid_bdev1", 00:12:10.403 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:10.403 "strip_size_kb": 0, 00:12:10.403 "state": "online", 00:12:10.403 "raid_level": "raid1", 00:12:10.403 "superblock": false, 00:12:10.403 "num_base_bdevs": 4, 00:12:10.403 "num_base_bdevs_discovered": 3, 00:12:10.403 "num_base_bdevs_operational": 3, 00:12:10.403 "base_bdevs_list": [ 00:12:10.403 { 00:12:10.403 "name": null, 00:12:10.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.403 "is_configured": false, 00:12:10.403 "data_offset": 0, 00:12:10.403 "data_size": 65536 00:12:10.403 }, 00:12:10.403 { 00:12:10.403 "name": "BaseBdev2", 00:12:10.403 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:10.403 "is_configured": true, 00:12:10.403 "data_offset": 0, 00:12:10.403 "data_size": 65536 00:12:10.403 }, 00:12:10.403 { 00:12:10.404 "name": "BaseBdev3", 00:12:10.404 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:10.404 "is_configured": true, 00:12:10.404 "data_offset": 0, 00:12:10.404 "data_size": 65536 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "name": "BaseBdev4", 00:12:10.404 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:10.404 "is_configured": true, 00:12:10.404 "data_offset": 0, 00:12:10.404 "data_size": 65536 00:12:10.404 } 00:12:10.404 ] 00:12:10.404 }' 00:12:10.404 11:55:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.404 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.975 170.00 IOPS, 510.00 MiB/s [2024-12-06T11:55:08.733Z] 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.975 "name": "raid_bdev1", 00:12:10.975 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:10.975 "strip_size_kb": 0, 00:12:10.975 "state": "online", 00:12:10.975 "raid_level": "raid1", 00:12:10.975 "superblock": false, 00:12:10.975 "num_base_bdevs": 4, 00:12:10.975 "num_base_bdevs_discovered": 3, 00:12:10.975 "num_base_bdevs_operational": 3, 00:12:10.975 "base_bdevs_list": [ 00:12:10.975 { 00:12:10.975 "name": null, 00:12:10.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.975 "is_configured": false, 00:12:10.975 "data_offset": 0, 00:12:10.975 "data_size": 65536 
00:12:10.975 }, 00:12:10.975 { 00:12:10.975 "name": "BaseBdev2", 00:12:10.975 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:10.975 "is_configured": true, 00:12:10.975 "data_offset": 0, 00:12:10.975 "data_size": 65536 00:12:10.975 }, 00:12:10.975 { 00:12:10.975 "name": "BaseBdev3", 00:12:10.975 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:10.975 "is_configured": true, 00:12:10.975 "data_offset": 0, 00:12:10.975 "data_size": 65536 00:12:10.975 }, 00:12:10.975 { 00:12:10.975 "name": "BaseBdev4", 00:12:10.975 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:10.975 "is_configured": true, 00:12:10.975 "data_offset": 0, 00:12:10.975 "data_size": 65536 00:12:10.975 } 00:12:10.975 ] 00:12:10.975 }' 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.975 [2024-12-06 11:55:08.682978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.975 11:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:10.975 [2024-12-06 11:55:08.722824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:10.975 [2024-12-06 11:55:08.724812] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.236 [2024-12-06 11:55:08.839421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.236 [2024-12-06 11:55:08.839963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.497 [2024-12-06 11:55:09.053626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:11.497 [2024-12-06 11:55:09.054281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:11.756 [2024-12-06 11:55:09.389465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:11.756 [2024-12-06 11:55:09.390678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.016 161.00 IOPS, 483.00 MiB/s [2024-12-06T11:55:09.774Z] [2024-12-06 11:55:09.604965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.016 [2024-12-06 11:55:09.605542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.016 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.016 "name": "raid_bdev1", 00:12:12.016 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:12.016 "strip_size_kb": 0, 00:12:12.016 "state": "online", 00:12:12.016 "raid_level": "raid1", 00:12:12.016 "superblock": false, 00:12:12.017 "num_base_bdevs": 4, 00:12:12.017 "num_base_bdevs_discovered": 4, 00:12:12.017 "num_base_bdevs_operational": 4, 00:12:12.017 "process": { 00:12:12.017 "type": "rebuild", 00:12:12.017 "target": "spare", 00:12:12.017 "progress": { 00:12:12.017 "blocks": 10240, 00:12:12.017 "percent": 15 00:12:12.017 } 00:12:12.017 }, 00:12:12.017 "base_bdevs_list": [ 00:12:12.017 { 00:12:12.017 "name": "spare", 00:12:12.017 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:12.017 "is_configured": true, 00:12:12.017 "data_offset": 0, 00:12:12.017 "data_size": 65536 00:12:12.017 }, 00:12:12.017 { 00:12:12.017 "name": "BaseBdev2", 00:12:12.017 "uuid": "cb2176a2-a56f-56c8-85e3-7a6076c7aad5", 00:12:12.017 "is_configured": true, 00:12:12.017 "data_offset": 0, 00:12:12.017 "data_size": 65536 00:12:12.017 }, 00:12:12.017 { 00:12:12.017 "name": "BaseBdev3", 00:12:12.017 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:12.017 "is_configured": true, 00:12:12.017 "data_offset": 0, 00:12:12.017 "data_size": 65536 00:12:12.017 }, 00:12:12.017 { 00:12:12.017 "name": "BaseBdev4", 00:12:12.017 "uuid": 
"21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:12.017 "is_configured": true, 00:12:12.017 "data_offset": 0, 00:12:12.017 "data_size": 65536 00:12:12.017 } 00:12:12.017 ] 00:12:12.017 }' 00:12:12.017 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.277 11:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.277 [2024-12-06 11:55:09.864013] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.277 [2024-12-06 11:55:09.949424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:12.537 [2024-12-06 11:55:10.055008] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:12.537 [2024-12-06 11:55:10.055047] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:12.537 [2024-12-06 11:55:10.063391] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.537 "name": "raid_bdev1", 00:12:12.537 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:12.537 "strip_size_kb": 0, 00:12:12.537 "state": "online", 00:12:12.537 "raid_level": "raid1", 00:12:12.537 "superblock": false, 00:12:12.537 "num_base_bdevs": 4, 00:12:12.537 "num_base_bdevs_discovered": 3, 00:12:12.537 "num_base_bdevs_operational": 3, 
00:12:12.537 "process": { 00:12:12.537 "type": "rebuild", 00:12:12.537 "target": "spare", 00:12:12.537 "progress": { 00:12:12.537 "blocks": 14336, 00:12:12.537 "percent": 21 00:12:12.537 } 00:12:12.537 }, 00:12:12.537 "base_bdevs_list": [ 00:12:12.537 { 00:12:12.537 "name": "spare", 00:12:12.537 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:12.537 "is_configured": true, 00:12:12.537 "data_offset": 0, 00:12:12.537 "data_size": 65536 00:12:12.537 }, 00:12:12.537 { 00:12:12.537 "name": null, 00:12:12.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.537 "is_configured": false, 00:12:12.537 "data_offset": 0, 00:12:12.537 "data_size": 65536 00:12:12.537 }, 00:12:12.537 { 00:12:12.537 "name": "BaseBdev3", 00:12:12.537 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:12.537 "is_configured": true, 00:12:12.537 "data_offset": 0, 00:12:12.537 "data_size": 65536 00:12:12.537 }, 00:12:12.537 { 00:12:12.537 "name": "BaseBdev4", 00:12:12.537 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:12.537 "is_configured": true, 00:12:12.537 "data_offset": 0, 00:12:12.537 "data_size": 65536 00:12:12.537 } 00:12:12.537 ] 00:12:12.537 }' 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.537 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.537 11:55:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.538 "name": "raid_bdev1", 00:12:12.538 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:12.538 "strip_size_kb": 0, 00:12:12.538 "state": "online", 00:12:12.538 "raid_level": "raid1", 00:12:12.538 "superblock": false, 00:12:12.538 "num_base_bdevs": 4, 00:12:12.538 "num_base_bdevs_discovered": 3, 00:12:12.538 "num_base_bdevs_operational": 3, 00:12:12.538 "process": { 00:12:12.538 "type": "rebuild", 00:12:12.538 "target": "spare", 00:12:12.538 "progress": { 00:12:12.538 "blocks": 14336, 00:12:12.538 "percent": 21 00:12:12.538 } 00:12:12.538 }, 00:12:12.538 "base_bdevs_list": [ 00:12:12.538 { 00:12:12.538 "name": "spare", 00:12:12.538 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:12.538 "is_configured": true, 00:12:12.538 "data_offset": 0, 00:12:12.538 "data_size": 65536 00:12:12.538 }, 00:12:12.538 { 00:12:12.538 "name": null, 00:12:12.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.538 
"is_configured": false, 00:12:12.538 "data_offset": 0, 00:12:12.538 "data_size": 65536 00:12:12.538 }, 00:12:12.538 { 00:12:12.538 "name": "BaseBdev3", 00:12:12.538 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:12.538 "is_configured": true, 00:12:12.538 "data_offset": 0, 00:12:12.538 "data_size": 65536 00:12:12.538 }, 00:12:12.538 { 00:12:12.538 "name": "BaseBdev4", 00:12:12.538 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:12.538 "is_configured": true, 00:12:12.538 "data_offset": 0, 00:12:12.538 "data_size": 65536 00:12:12.538 } 00:12:12.538 ] 00:12:12.538 }' 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.538 [2024-12-06 11:55:10.272670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.538 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.799 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.799 11:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.059 131.00 IOPS, 393.00 MiB/s [2024-12-06T11:55:10.817Z] [2024-12-06 11:55:10.618641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:13.319 [2024-12-06 11:55:10.972014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:13.579 [2024-12-06 11:55:11.312159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:13.579 [2024-12-06 11:55:11.312606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 
36864 00:12:13.579 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.579 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.579 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.579 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.579 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.841 "name": "raid_bdev1", 00:12:13.841 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:13.841 "strip_size_kb": 0, 00:12:13.841 "state": "online", 00:12:13.841 "raid_level": "raid1", 00:12:13.841 "superblock": false, 00:12:13.841 "num_base_bdevs": 4, 00:12:13.841 "num_base_bdevs_discovered": 3, 00:12:13.841 "num_base_bdevs_operational": 3, 00:12:13.841 "process": { 00:12:13.841 "type": "rebuild", 00:12:13.841 "target": "spare", 00:12:13.841 "progress": { 00:12:13.841 "blocks": 34816, 00:12:13.841 "percent": 53 00:12:13.841 } 00:12:13.841 }, 00:12:13.841 "base_bdevs_list": [ 00:12:13.841 { 00:12:13.841 "name": "spare", 00:12:13.841 "uuid": 
"d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:13.841 "is_configured": true, 00:12:13.841 "data_offset": 0, 00:12:13.841 "data_size": 65536 00:12:13.841 }, 00:12:13.841 { 00:12:13.841 "name": null, 00:12:13.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.841 "is_configured": false, 00:12:13.841 "data_offset": 0, 00:12:13.841 "data_size": 65536 00:12:13.841 }, 00:12:13.841 { 00:12:13.841 "name": "BaseBdev3", 00:12:13.841 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:13.841 "is_configured": true, 00:12:13.841 "data_offset": 0, 00:12:13.841 "data_size": 65536 00:12:13.841 }, 00:12:13.841 { 00:12:13.841 "name": "BaseBdev4", 00:12:13.841 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:13.841 "is_configured": true, 00:12:13.841 "data_offset": 0, 00:12:13.841 "data_size": 65536 00:12:13.841 } 00:12:13.841 ] 00:12:13.841 }' 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.841 119.40 IOPS, 358.20 MiB/s [2024-12-06T11:55:11.599Z] 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.841 11:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.100 [2024-12-06 11:55:11.640307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:14.100 [2024-12-06 11:55:11.849745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:14.360 [2024-12-06 11:55:12.062538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:14.930 105.50 IOPS, 316.50 MiB/s 
[2024-12-06T11:55:12.688Z] 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.930 [2024-12-06 11:55:12.497099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.930 "name": "raid_bdev1", 00:12:14.930 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:14.930 "strip_size_kb": 0, 00:12:14.930 "state": "online", 00:12:14.930 "raid_level": "raid1", 00:12:14.930 "superblock": false, 00:12:14.930 "num_base_bdevs": 4, 00:12:14.930 "num_base_bdevs_discovered": 3, 00:12:14.930 "num_base_bdevs_operational": 3, 00:12:14.930 "process": { 00:12:14.930 "type": "rebuild", 00:12:14.930 "target": "spare", 00:12:14.930 "progress": { 00:12:14.930 "blocks": 
51200, 00:12:14.930 "percent": 78 00:12:14.930 } 00:12:14.930 }, 00:12:14.930 "base_bdevs_list": [ 00:12:14.930 { 00:12:14.930 "name": "spare", 00:12:14.930 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:14.930 "is_configured": true, 00:12:14.930 "data_offset": 0, 00:12:14.930 "data_size": 65536 00:12:14.930 }, 00:12:14.930 { 00:12:14.930 "name": null, 00:12:14.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.930 "is_configured": false, 00:12:14.930 "data_offset": 0, 00:12:14.930 "data_size": 65536 00:12:14.930 }, 00:12:14.930 { 00:12:14.930 "name": "BaseBdev3", 00:12:14.930 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:14.930 "is_configured": true, 00:12:14.930 "data_offset": 0, 00:12:14.930 "data_size": 65536 00:12:14.930 }, 00:12:14.930 { 00:12:14.930 "name": "BaseBdev4", 00:12:14.930 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:14.930 "is_configured": true, 00:12:14.930 "data_offset": 0, 00:12:14.930 "data_size": 65536 00:12:14.930 } 00:12:14.930 ] 00:12:14.930 }' 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.930 11:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.190 [2024-12-06 11:55:12.931428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:15.762 [2024-12-06 11:55:13.368035] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:15.762 95.00 IOPS, 285.00 MiB/s [2024-12-06T11:55:13.520Z] [2024-12-06 11:55:13.472805] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:15.762 [2024-12-06 11:55:13.476238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.022 "name": "raid_bdev1", 00:12:16.022 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:16.022 "strip_size_kb": 0, 00:12:16.022 "state": "online", 00:12:16.022 "raid_level": "raid1", 00:12:16.022 "superblock": false, 00:12:16.022 "num_base_bdevs": 4, 00:12:16.022 "num_base_bdevs_discovered": 3, 00:12:16.022 "num_base_bdevs_operational": 3, 00:12:16.022 "base_bdevs_list": [ 00:12:16.022 { 00:12:16.022 "name": "spare", 00:12:16.022 "uuid": 
"d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:16.022 "is_configured": true, 00:12:16.022 "data_offset": 0, 00:12:16.022 "data_size": 65536 00:12:16.022 }, 00:12:16.022 { 00:12:16.022 "name": null, 00:12:16.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.022 "is_configured": false, 00:12:16.022 "data_offset": 0, 00:12:16.022 "data_size": 65536 00:12:16.022 }, 00:12:16.022 { 00:12:16.022 "name": "BaseBdev3", 00:12:16.022 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:16.022 "is_configured": true, 00:12:16.022 "data_offset": 0, 00:12:16.022 "data_size": 65536 00:12:16.022 }, 00:12:16.022 { 00:12:16.022 "name": "BaseBdev4", 00:12:16.022 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:16.022 "is_configured": true, 00:12:16.022 "data_offset": 0, 00:12:16.022 "data_size": 65536 00:12:16.022 } 00:12:16.022 ] 00:12:16.022 }' 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.022 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.022 11:55:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.023 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.023 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.023 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.023 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.283 "name": "raid_bdev1", 00:12:16.283 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:16.283 "strip_size_kb": 0, 00:12:16.283 "state": "online", 00:12:16.283 "raid_level": "raid1", 00:12:16.283 "superblock": false, 00:12:16.283 "num_base_bdevs": 4, 00:12:16.283 "num_base_bdevs_discovered": 3, 00:12:16.283 "num_base_bdevs_operational": 3, 00:12:16.283 "base_bdevs_list": [ 00:12:16.283 { 00:12:16.283 "name": "spare", 00:12:16.283 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:16.283 "is_configured": true, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.283 { 00:12:16.283 "name": null, 00:12:16.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.283 "is_configured": false, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.283 { 00:12:16.283 "name": "BaseBdev3", 00:12:16.283 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:16.283 "is_configured": true, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.283 { 00:12:16.283 "name": "BaseBdev4", 00:12:16.283 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:16.283 "is_configured": true, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 } 00:12:16.283 ] 00:12:16.283 }' 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.283 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.283 "name": "raid_bdev1", 00:12:16.283 "uuid": "76254418-8c60-4f98-8924-fd93fb760717", 00:12:16.283 "strip_size_kb": 0, 00:12:16.283 "state": "online", 00:12:16.283 "raid_level": "raid1", 00:12:16.283 "superblock": false, 00:12:16.283 "num_base_bdevs": 4, 00:12:16.283 "num_base_bdevs_discovered": 3, 00:12:16.283 "num_base_bdevs_operational": 3, 00:12:16.283 "base_bdevs_list": [ 00:12:16.283 { 00:12:16.283 "name": "spare", 00:12:16.283 "uuid": "d58107c8-5e95-55ef-a31f-6c626e4f6dbd", 00:12:16.283 "is_configured": true, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.283 { 00:12:16.283 "name": null, 00:12:16.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.283 "is_configured": false, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.283 { 00:12:16.283 "name": "BaseBdev3", 00:12:16.283 "uuid": "b710ee79-1bc1-5814-8ef5-a429d618865a", 00:12:16.283 "is_configured": true, 00:12:16.283 "data_offset": 0, 00:12:16.283 "data_size": 65536 00:12:16.283 }, 00:12:16.284 { 00:12:16.284 "name": "BaseBdev4", 00:12:16.284 "uuid": "21bff7f7-f726-5f23-b07e-51f4513f91b3", 00:12:16.284 "is_configured": true, 00:12:16.284 "data_offset": 0, 00:12:16.284 "data_size": 65536 00:12:16.284 } 00:12:16.284 ] 00:12:16.284 }' 00:12:16.284 11:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.284 11:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.544 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.544 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.544 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.544 
[2024-12-06 11:55:14.296669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.544 [2024-12-06 11:55:14.296706] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.804 00:12:16.804 Latency(us) 00:12:16.804 [2024-12-06T11:55:14.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.804 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:16.804 raid_bdev1 : 7.94 87.73 263.20 0.00 0.00 14946.51 266.51 114473.36 00:12:16.804 [2024-12-06T11:55:14.562Z] =================================================================================================================== 00:12:16.804 [2024-12-06T11:55:14.562Z] Total : 87.73 263.20 0.00 0.00 14946.51 266.51 114473.36 00:12:16.804 [2024-12-06 11:55:14.335591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.804 [2024-12-06 11:55:14.335646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.804 [2024-12-06 11:55:14.335741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.804 [2024-12-06 11:55:14.335767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:16.804 { 00:12:16.804 "results": [ 00:12:16.804 { 00:12:16.804 "job": "raid_bdev1", 00:12:16.805 "core_mask": "0x1", 00:12:16.805 "workload": "randrw", 00:12:16.805 "percentage": 50, 00:12:16.805 "status": "finished", 00:12:16.805 "queue_depth": 2, 00:12:16.805 "io_size": 3145728, 00:12:16.805 "runtime": 7.944552, 00:12:16.805 "iops": 87.73307796336408, 00:12:16.805 "mibps": 263.1992338900923, 00:12:16.805 "io_failed": 0, 00:12:16.805 "io_timeout": 0, 00:12:16.805 "avg_latency_us": 14946.50963016797, 00:12:16.805 "min_latency_us": 266.5082969432314, 00:12:16.805 "max_latency_us": 114473.36244541485 00:12:16.805 } 
00:12:16.805 ], 00:12:16.805 "core_count": 1 00:12:16.805 } 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.805 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:17.066 /dev/nbd0 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.066 1+0 records in 00:12:17.066 1+0 records out 00:12:17.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277561 s, 14.8 MB/s 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.066 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:17.327 /dev/nbd1 00:12:17.327 11:55:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.327 1+0 records in 00:12:17.327 1+0 records out 00:12:17.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220927 s, 18.5 MB/s 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:17.327 11:55:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.327 11:55:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.587 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:17.587 /dev/nbd1 00:12:17.846 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w 
nbd1 /proc/partitions 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.847 1+0 records in 00:12:17.847 1+0 records out 00:12:17.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388119 s, 10.6 MB/s 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.847 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88953 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 88953 ']' 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 88953 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.107 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88953 00:12:18.367 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.368 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.368 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88953' 00:12:18.368 killing process with pid 88953 00:12:18.368 11:55:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 88953 00:12:18.368 Received shutdown signal, test time was about 9.500073 seconds 00:12:18.368 00:12:18.368 Latency(us) 00:12:18.368 [2024-12-06T11:55:16.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.368 [2024-12-06T11:55:16.126Z] =================================================================================================================== 00:12:18.368 [2024-12-06T11:55:16.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:18.368 [2024-12-06 11:55:15.885103] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.368 11:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 88953 00:12:18.368 [2024-12-06 11:55:15.931440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:18.628 00:12:18.628 real 0m11.394s 00:12:18.628 user 0m14.742s 00:12:18.628 sys 0m1.628s 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.628 ************************************ 00:12:18.628 END TEST raid_rebuild_test_io 00:12:18.628 ************************************ 00:12:18.628 11:55:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:18.628 11:55:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:18.628 11:55:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.628 11:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.628 ************************************ 00:12:18.628 START TEST raid_rebuild_test_sb_io 00:12:18.628 ************************************ 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.628 11:55:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:18.628 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89343 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89343 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89343 ']' 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.629 11:55:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.629 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.629 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:18.629 Zero copy mechanism will not be used. 00:12:18.629 [2024-12-06 11:55:16.333206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:18.629 [2024-12-06 11:55:16.333352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89343 ] 00:12:18.888 [2024-12-06 11:55:16.477121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.888 [2024-12-06 11:55:16.521015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.888 [2024-12-06 11:55:16.563663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.888 [2024-12-06 11:55:16.563702] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 BaseBdev1_malloc 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.459 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 [2024-12-06 11:55:17.170281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.459 [2024-12-06 11:55:17.170426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.459 [2024-12-06 11:55:17.170471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:19.459 [2024-12-06 11:55:17.170512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.459 [2024-12-06 11:55:17.172669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.459 [2024-12-06 11:55:17.172756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.459 BaseBdev1 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.460 BaseBdev2_malloc 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.460 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.460 [2024-12-06 11:55:17.212933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:19.460 [2024-12-06 11:55:17.213136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.460 [2024-12-06 11:55:17.213194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.460 [2024-12-06 11:55:17.213219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.733 [2024-12-06 11:55:17.217521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.734 [2024-12-06 11:55:17.217575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.734 BaseBdev2 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 BaseBdev3_malloc 00:12:19.734 
11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 [2024-12-06 11:55:17.243534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:19.734 [2024-12-06 11:55:17.243588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.734 [2024-12-06 11:55:17.243614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.734 [2024-12-06 11:55:17.243623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.734 [2024-12-06 11:55:17.245641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.734 [2024-12-06 11:55:17.245676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:19.734 BaseBdev3 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 BaseBdev4_malloc 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 [2024-12-06 11:55:17.272061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:19.734 [2024-12-06 11:55:17.272108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.734 [2024-12-06 11:55:17.272128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:19.734 [2024-12-06 11:55:17.272137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.734 [2024-12-06 11:55:17.274149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.734 [2024-12-06 11:55:17.274238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:19.734 BaseBdev4 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 spare_malloc 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.734 spare_delay 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 [2024-12-06 11:55:17.312518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.734 [2024-12-06 11:55:17.312630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.734 [2024-12-06 11:55:17.312653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:19.734 [2024-12-06 11:55:17.312662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.734 [2024-12-06 11:55:17.314661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.734 [2024-12-06 11:55:17.314698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.734 spare 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 [2024-12-06 11:55:17.324570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.734 [2024-12-06 11:55:17.326312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:12:19.734 [2024-12-06 11:55:17.326419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.734 [2024-12-06 11:55:17.326472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:19.734 [2024-12-06 11:55:17.326627] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:19.734 [2024-12-06 11:55:17.326638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.734 [2024-12-06 11:55:17.326875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:19.734 [2024-12-06 11:55:17.327005] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:19.734 [2024-12-06 11:55:17.327019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:19.734 [2024-12-06 11:55:17.327144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.734 11:55:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.734 "name": "raid_bdev1", 00:12:19.734 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:19.734 "strip_size_kb": 0, 00:12:19.734 "state": "online", 00:12:19.734 "raid_level": "raid1", 00:12:19.734 "superblock": true, 00:12:19.734 "num_base_bdevs": 4, 00:12:19.734 "num_base_bdevs_discovered": 4, 00:12:19.734 "num_base_bdevs_operational": 4, 00:12:19.734 "base_bdevs_list": [ 00:12:19.734 { 00:12:19.734 "name": "BaseBdev1", 00:12:19.734 "uuid": "f23b066d-4b7f-52e2-a423-894cae6bd208", 00:12:19.734 "is_configured": true, 00:12:19.734 "data_offset": 2048, 00:12:19.734 "data_size": 63488 00:12:19.734 }, 00:12:19.734 { 00:12:19.734 "name": "BaseBdev2", 00:12:19.734 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:19.734 "is_configured": true, 00:12:19.734 "data_offset": 2048, 00:12:19.734 "data_size": 63488 00:12:19.734 }, 00:12:19.734 { 00:12:19.734 "name": "BaseBdev3", 00:12:19.734 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:19.734 "is_configured": true, 00:12:19.734 "data_offset": 2048, 
00:12:19.734 "data_size": 63488 00:12:19.734 }, 00:12:19.734 { 00:12:19.734 "name": "BaseBdev4", 00:12:19.734 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:19.734 "is_configured": true, 00:12:19.734 "data_offset": 2048, 00:12:19.734 "data_size": 63488 00:12:19.734 } 00:12:19.734 ] 00:12:19.734 }' 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.734 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.016 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.016 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:20.016 [2024-12-06 11:55:17.736209] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.016 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:20.286 11:55:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.286 [2024-12-06 11:55:17.827735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.286 "name": "raid_bdev1", 00:12:20.286 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:20.286 "strip_size_kb": 0, 00:12:20.286 "state": "online", 00:12:20.286 "raid_level": "raid1", 00:12:20.286 "superblock": true, 00:12:20.286 "num_base_bdevs": 4, 00:12:20.286 "num_base_bdevs_discovered": 3, 00:12:20.286 "num_base_bdevs_operational": 3, 00:12:20.286 "base_bdevs_list": [ 00:12:20.286 { 00:12:20.286 "name": null, 00:12:20.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.286 "is_configured": false, 00:12:20.286 "data_offset": 0, 00:12:20.286 "data_size": 63488 00:12:20.286 }, 00:12:20.286 { 00:12:20.286 "name": "BaseBdev2", 00:12:20.286 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:20.286 "is_configured": true, 00:12:20.286 "data_offset": 2048, 00:12:20.286 "data_size": 63488 00:12:20.286 }, 00:12:20.286 { 00:12:20.286 "name": "BaseBdev3", 00:12:20.286 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:20.286 "is_configured": true, 00:12:20.286 "data_offset": 2048, 00:12:20.286 "data_size": 63488 00:12:20.286 }, 00:12:20.286 { 00:12:20.286 "name": "BaseBdev4", 00:12:20.286 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:20.286 "is_configured": true, 00:12:20.286 "data_offset": 2048, 00:12:20.286 "data_size": 63488 00:12:20.286 } 00:12:20.286 ] 00:12:20.286 }' 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.286 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.286 [2024-12-06 11:55:17.913644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:20.286 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.286 Zero copy mechanism will not be used. 00:12:20.286 Running I/O for 60 seconds... 00:12:20.547 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.547 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.547 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.547 [2024-12-06 11:55:18.275347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.547 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.807 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:20.807 [2024-12-06 11:55:18.310134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:20.807 [2024-12-06 11:55:18.312160] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.807 [2024-12-06 11:55:18.420500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:20.807 [2024-12-06 11:55:18.421718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:21.092 [2024-12-06 11:55:18.635298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:21.092 [2024-12-06 11:55:18.635980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:21.352 159.00 IOPS, 477.00 MiB/s [2024-12-06T11:55:19.110Z] [2024-12-06 11:55:18.988722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:21.612 [2024-12-06 11:55:19.220089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:21.612 [2024-12-06 11:55:19.220488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.612 "name": "raid_bdev1", 00:12:21.612 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:21.612 "strip_size_kb": 0, 00:12:21.612 "state": "online", 00:12:21.612 "raid_level": "raid1", 
00:12:21.612 "superblock": true, 00:12:21.612 "num_base_bdevs": 4, 00:12:21.612 "num_base_bdevs_discovered": 4, 00:12:21.612 "num_base_bdevs_operational": 4, 00:12:21.612 "process": { 00:12:21.612 "type": "rebuild", 00:12:21.612 "target": "spare", 00:12:21.612 "progress": { 00:12:21.612 "blocks": 10240, 00:12:21.612 "percent": 16 00:12:21.612 } 00:12:21.612 }, 00:12:21.612 "base_bdevs_list": [ 00:12:21.612 { 00:12:21.612 "name": "spare", 00:12:21.612 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:21.612 "is_configured": true, 00:12:21.612 "data_offset": 2048, 00:12:21.612 "data_size": 63488 00:12:21.612 }, 00:12:21.612 { 00:12:21.612 "name": "BaseBdev2", 00:12:21.612 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:21.612 "is_configured": true, 00:12:21.612 "data_offset": 2048, 00:12:21.612 "data_size": 63488 00:12:21.612 }, 00:12:21.612 { 00:12:21.612 "name": "BaseBdev3", 00:12:21.612 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:21.612 "is_configured": true, 00:12:21.612 "data_offset": 2048, 00:12:21.612 "data_size": 63488 00:12:21.612 }, 00:12:21.612 { 00:12:21.612 "name": "BaseBdev4", 00:12:21.612 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:21.612 "is_configured": true, 00:12:21.612 "data_offset": 2048, 00:12:21.612 "data_size": 63488 00:12:21.612 } 00:12:21.612 ] 00:12:21.612 }' 00:12:21.612 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.871 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.871 [2024-12-06 11:55:19.454268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.872 [2024-12-06 11:55:19.538974] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:21.872 [2024-12-06 11:55:19.555504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.872 [2024-12-06 11:55:19.555552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.872 [2024-12-06 11:55:19.555577] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:21.872 [2024-12-06 11:55:19.578573] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.131 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.131 "name": "raid_bdev1", 00:12:22.131 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:22.131 "strip_size_kb": 0, 00:12:22.131 "state": "online", 00:12:22.131 "raid_level": "raid1", 00:12:22.131 "superblock": true, 00:12:22.131 "num_base_bdevs": 4, 00:12:22.131 "num_base_bdevs_discovered": 3, 00:12:22.131 "num_base_bdevs_operational": 3, 00:12:22.131 "base_bdevs_list": [ 00:12:22.131 { 00:12:22.131 "name": null, 00:12:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.131 "is_configured": false, 00:12:22.131 "data_offset": 0, 00:12:22.131 "data_size": 63488 00:12:22.131 }, 00:12:22.131 { 00:12:22.131 "name": "BaseBdev2", 00:12:22.131 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:22.131 "is_configured": true, 00:12:22.131 "data_offset": 2048, 00:12:22.131 "data_size": 63488 00:12:22.131 }, 00:12:22.131 { 00:12:22.131 "name": "BaseBdev3", 00:12:22.131 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:22.131 "is_configured": true, 00:12:22.131 "data_offset": 2048, 00:12:22.131 "data_size": 63488 00:12:22.131 }, 00:12:22.131 { 00:12:22.131 "name": "BaseBdev4", 00:12:22.131 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 
00:12:22.131 "is_configured": true, 00:12:22.131 "data_offset": 2048, 00:12:22.131 "data_size": 63488 00:12:22.131 } 00:12:22.131 ] 00:12:22.131 }' 00:12:22.131 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.131 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.390 155.50 IOPS, 466.50 MiB/s [2024-12-06T11:55:20.148Z] 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.390 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.390 "name": "raid_bdev1", 00:12:22.390 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:22.391 "strip_size_kb": 0, 00:12:22.391 "state": "online", 00:12:22.391 "raid_level": "raid1", 00:12:22.391 "superblock": true, 00:12:22.391 "num_base_bdevs": 4, 00:12:22.391 "num_base_bdevs_discovered": 3, 00:12:22.391 "num_base_bdevs_operational": 3, 
00:12:22.391 "base_bdevs_list": [ 00:12:22.391 { 00:12:22.391 "name": null, 00:12:22.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.391 "is_configured": false, 00:12:22.391 "data_offset": 0, 00:12:22.391 "data_size": 63488 00:12:22.391 }, 00:12:22.391 { 00:12:22.391 "name": "BaseBdev2", 00:12:22.391 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:22.391 "is_configured": true, 00:12:22.391 "data_offset": 2048, 00:12:22.391 "data_size": 63488 00:12:22.391 }, 00:12:22.391 { 00:12:22.391 "name": "BaseBdev3", 00:12:22.391 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:22.391 "is_configured": true, 00:12:22.391 "data_offset": 2048, 00:12:22.391 "data_size": 63488 00:12:22.391 }, 00:12:22.391 { 00:12:22.391 "name": "BaseBdev4", 00:12:22.391 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:22.391 "is_configured": true, 00:12:22.391 "data_offset": 2048, 00:12:22.391 "data_size": 63488 00:12:22.391 } 00:12:22.391 ] 00:12:22.391 }' 00:12:22.391 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.650 [2024-12-06 11:55:20.206691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:22.650 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:22.650 [2024-12-06 11:55:20.241859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:22.650 [2024-12-06 11:55:20.243882] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.650 [2024-12-06 11:55:20.359014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.650 [2024-12-06 11:55:20.359360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.911 [2024-12-06 11:55:20.472606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.911 [2024-12-06 11:55:20.473229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:23.171 [2024-12-06 11:55:20.806902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:23.171 [2024-12-06 11:55:20.807247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:23.430 149.33 IOPS, 448.00 MiB/s [2024-12-06T11:55:21.188Z] [2024-12-06 11:55:21.024530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:23.430 [2024-12-06 11:55:21.025149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.689 11:55:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.689 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.689 "name": "raid_bdev1", 00:12:23.689 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:23.689 "strip_size_kb": 0, 00:12:23.689 "state": "online", 00:12:23.689 "raid_level": "raid1", 00:12:23.689 "superblock": true, 00:12:23.689 "num_base_bdevs": 4, 00:12:23.689 "num_base_bdevs_discovered": 4, 00:12:23.689 "num_base_bdevs_operational": 4, 00:12:23.689 "process": { 00:12:23.689 "type": "rebuild", 00:12:23.689 "target": "spare", 00:12:23.689 "progress": { 00:12:23.689 "blocks": 12288, 00:12:23.689 "percent": 19 00:12:23.689 } 00:12:23.690 }, 00:12:23.690 "base_bdevs_list": [ 00:12:23.690 { 00:12:23.690 "name": "spare", 00:12:23.690 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:23.690 "is_configured": true, 00:12:23.690 "data_offset": 2048, 00:12:23.690 "data_size": 63488 00:12:23.690 }, 00:12:23.690 { 00:12:23.690 "name": "BaseBdev2", 00:12:23.690 "uuid": "d7991805-ecab-53e0-98d3-bcbbe425df32", 00:12:23.690 "is_configured": true, 00:12:23.690 "data_offset": 2048, 00:12:23.690 "data_size": 
63488 00:12:23.690 }, 00:12:23.690 { 00:12:23.690 "name": "BaseBdev3", 00:12:23.690 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:23.690 "is_configured": true, 00:12:23.690 "data_offset": 2048, 00:12:23.690 "data_size": 63488 00:12:23.690 }, 00:12:23.690 { 00:12:23.690 "name": "BaseBdev4", 00:12:23.690 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:23.690 "is_configured": true, 00:12:23.690 "data_offset": 2048, 00:12:23.690 "data_size": 63488 00:12:23.690 } 00:12:23.690 ] 00:12:23.690 }' 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.690 [2024-12-06 11:55:21.351330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:23.690 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.690 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.690 [2024-12-06 11:55:21.384318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.948 [2024-12-06 11:55:21.580299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:24.208 [2024-12-06 11:55:21.794065] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:24.208 [2024-12-06 11:55:21.794096] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:24.208 [2024-12-06 11:55:21.801520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.208 "name": "raid_bdev1", 00:12:24.208 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:24.208 "strip_size_kb": 0, 00:12:24.208 "state": "online", 00:12:24.208 "raid_level": "raid1", 00:12:24.208 "superblock": true, 00:12:24.208 "num_base_bdevs": 4, 00:12:24.208 "num_base_bdevs_discovered": 3, 00:12:24.208 "num_base_bdevs_operational": 3, 00:12:24.208 "process": { 00:12:24.208 "type": "rebuild", 00:12:24.208 "target": "spare", 00:12:24.208 "progress": { 00:12:24.208 "blocks": 16384, 00:12:24.208 "percent": 25 00:12:24.208 } 00:12:24.208 }, 00:12:24.208 "base_bdevs_list": [ 00:12:24.208 { 00:12:24.208 "name": "spare", 00:12:24.208 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:24.208 "is_configured": true, 00:12:24.208 "data_offset": 2048, 00:12:24.208 "data_size": 63488 00:12:24.208 }, 00:12:24.208 { 00:12:24.208 "name": null, 00:12:24.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.208 "is_configured": false, 00:12:24.208 "data_offset": 0, 00:12:24.208 "data_size": 63488 00:12:24.208 }, 00:12:24.208 { 00:12:24.208 "name": "BaseBdev3", 00:12:24.208 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:24.208 "is_configured": true, 00:12:24.208 "data_offset": 2048, 00:12:24.208 "data_size": 63488 00:12:24.208 }, 00:12:24.208 { 00:12:24.208 "name": "BaseBdev4", 00:12:24.208 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:24.208 "is_configured": true, 00:12:24.208 "data_offset": 2048, 00:12:24.208 "data_size": 63488 00:12:24.208 } 00:12:24.208 ] 00:12:24.208 }' 00:12:24.208 11:55:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.208 127.75 IOPS, 383.25 MiB/s [2024-12-06T11:55:21.966Z] 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.208 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.467 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.467 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.467 "name": "raid_bdev1", 
00:12:24.467 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:24.467 "strip_size_kb": 0, 00:12:24.467 "state": "online", 00:12:24.467 "raid_level": "raid1", 00:12:24.467 "superblock": true, 00:12:24.467 "num_base_bdevs": 4, 00:12:24.467 "num_base_bdevs_discovered": 3, 00:12:24.467 "num_base_bdevs_operational": 3, 00:12:24.467 "process": { 00:12:24.467 "type": "rebuild", 00:12:24.467 "target": "spare", 00:12:24.467 "progress": { 00:12:24.467 "blocks": 16384, 00:12:24.467 "percent": 25 00:12:24.467 } 00:12:24.467 }, 00:12:24.467 "base_bdevs_list": [ 00:12:24.467 { 00:12:24.467 "name": "spare", 00:12:24.467 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:24.467 "is_configured": true, 00:12:24.467 "data_offset": 2048, 00:12:24.467 "data_size": 63488 00:12:24.467 }, 00:12:24.467 { 00:12:24.467 "name": null, 00:12:24.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.467 "is_configured": false, 00:12:24.467 "data_offset": 0, 00:12:24.467 "data_size": 63488 00:12:24.467 }, 00:12:24.467 { 00:12:24.467 "name": "BaseBdev3", 00:12:24.467 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:24.467 "is_configured": true, 00:12:24.467 "data_offset": 2048, 00:12:24.467 "data_size": 63488 00:12:24.467 }, 00:12:24.467 { 00:12:24.467 "name": "BaseBdev4", 00:12:24.467 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:24.467 "is_configured": true, 00:12:24.467 "data_offset": 2048, 00:12:24.467 "data_size": 63488 00:12:24.467 } 00:12:24.467 ] 00:12:24.467 }' 00:12:24.467 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.467 11:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.467 11:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.467 11:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.467 11:55:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.467 [2024-12-06 11:55:22.113516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:24.726 [2024-12-06 11:55:22.340323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:24.985 [2024-12-06 11:55:22.657501] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:24.985 [2024-12-06 11:55:22.657834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:25.502 117.80 IOPS, 353.40 MiB/s [2024-12-06T11:55:23.260Z] [2024-12-06 11:55:23.019758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.502 "name": "raid_bdev1", 00:12:25.502 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:25.502 "strip_size_kb": 0, 00:12:25.502 "state": "online", 00:12:25.502 "raid_level": "raid1", 00:12:25.502 "superblock": true, 00:12:25.502 "num_base_bdevs": 4, 00:12:25.502 "num_base_bdevs_discovered": 3, 00:12:25.502 "num_base_bdevs_operational": 3, 00:12:25.502 "process": { 00:12:25.502 "type": "rebuild", 00:12:25.502 "target": "spare", 00:12:25.502 "progress": { 00:12:25.502 "blocks": 32768, 00:12:25.502 "percent": 51 00:12:25.502 } 00:12:25.502 }, 00:12:25.502 "base_bdevs_list": [ 00:12:25.502 { 00:12:25.502 "name": "spare", 00:12:25.502 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:25.502 "is_configured": true, 00:12:25.502 "data_offset": 2048, 00:12:25.502 "data_size": 63488 00:12:25.502 }, 00:12:25.502 { 00:12:25.502 "name": null, 00:12:25.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.502 "is_configured": false, 00:12:25.502 "data_offset": 0, 00:12:25.502 "data_size": 63488 00:12:25.502 }, 00:12:25.502 { 00:12:25.502 "name": "BaseBdev3", 00:12:25.502 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:25.502 "is_configured": true, 00:12:25.502 "data_offset": 2048, 00:12:25.502 "data_size": 63488 00:12:25.502 }, 00:12:25.502 { 00:12:25.502 "name": "BaseBdev4", 00:12:25.502 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:25.502 "is_configured": true, 00:12:25.502 "data_offset": 2048, 00:12:25.502 "data_size": 63488 00:12:25.502 } 00:12:25.502 ] 00:12:25.502 }' 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.502 11:55:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.502 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.761 [2024-12-06 11:55:23.389332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:26.591 106.83 IOPS, 320.50 MiB/s [2024-12-06T11:55:24.349Z] 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 [2024-12-06 11:55:24.257028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:26.591 [2024-12-06 11:55:24.257313] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.591 "name": "raid_bdev1", 00:12:26.591 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:26.591 "strip_size_kb": 0, 00:12:26.591 "state": "online", 00:12:26.591 "raid_level": "raid1", 00:12:26.591 "superblock": true, 00:12:26.591 "num_base_bdevs": 4, 00:12:26.591 "num_base_bdevs_discovered": 3, 00:12:26.591 "num_base_bdevs_operational": 3, 00:12:26.591 "process": { 00:12:26.591 "type": "rebuild", 00:12:26.591 "target": "spare", 00:12:26.591 "progress": { 00:12:26.591 "blocks": 51200, 00:12:26.591 "percent": 80 00:12:26.591 } 00:12:26.591 }, 00:12:26.591 "base_bdevs_list": [ 00:12:26.591 { 00:12:26.591 "name": "spare", 00:12:26.591 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:26.591 "is_configured": true, 00:12:26.591 "data_offset": 2048, 00:12:26.591 "data_size": 63488 00:12:26.591 }, 00:12:26.591 { 00:12:26.591 "name": null, 00:12:26.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.591 "is_configured": false, 00:12:26.591 "data_offset": 0, 00:12:26.591 "data_size": 63488 00:12:26.591 }, 00:12:26.591 { 00:12:26.591 "name": "BaseBdev3", 00:12:26.591 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:26.591 "is_configured": true, 00:12:26.591 "data_offset": 2048, 00:12:26.591 "data_size": 63488 00:12:26.591 }, 00:12:26.591 { 00:12:26.591 "name": "BaseBdev4", 00:12:26.591 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:26.591 "is_configured": true, 00:12:26.591 "data_offset": 2048, 00:12:26.591 "data_size": 63488 00:12:26.591 } 00:12:26.591 ] 00:12:26.591 }' 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.591 11:55:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.591 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.852 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.852 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.852 [2024-12-06 11:55:24.585907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:27.112 [2024-12-06 11:55:24.800590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:27.639 95.71 IOPS, 287.14 MiB/s [2024-12-06T11:55:25.397Z] [2024-12-06 11:55:25.129276] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:27.639 [2024-12-06 11:55:25.229106] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:27.639 [2024-12-06 11:55:25.236620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.639 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.640 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.904 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.904 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.904 "name": "raid_bdev1", 00:12:27.904 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:27.904 "strip_size_kb": 0, 00:12:27.904 "state": "online", 00:12:27.904 "raid_level": "raid1", 00:12:27.904 "superblock": true, 00:12:27.904 "num_base_bdevs": 4, 00:12:27.904 "num_base_bdevs_discovered": 3, 00:12:27.904 "num_base_bdevs_operational": 3, 00:12:27.904 "base_bdevs_list": [ 00:12:27.904 { 00:12:27.904 "name": "spare", 00:12:27.904 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:27.904 "is_configured": true, 00:12:27.904 "data_offset": 2048, 00:12:27.904 "data_size": 63488 00:12:27.904 }, 00:12:27.904 { 00:12:27.904 "name": null, 00:12:27.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.904 "is_configured": false, 00:12:27.904 "data_offset": 0, 00:12:27.904 "data_size": 63488 00:12:27.904 }, 00:12:27.904 { 00:12:27.904 "name": "BaseBdev3", 00:12:27.904 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:27.904 "is_configured": true, 00:12:27.904 "data_offset": 2048, 00:12:27.904 "data_size": 63488 00:12:27.904 }, 00:12:27.904 { 00:12:27.904 "name": "BaseBdev4", 00:12:27.904 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:27.904 "is_configured": true, 00:12:27.905 "data_offset": 2048, 00:12:27.905 "data_size": 63488 00:12:27.905 } 00:12:27.905 ] 00:12:27.905 }' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.905 
11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.905 "name": "raid_bdev1", 00:12:27.905 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:27.905 "strip_size_kb": 0, 00:12:27.905 "state": "online", 00:12:27.905 "raid_level": "raid1", 00:12:27.905 "superblock": true, 00:12:27.905 "num_base_bdevs": 4, 00:12:27.905 "num_base_bdevs_discovered": 3, 00:12:27.905 
"num_base_bdevs_operational": 3, 00:12:27.905 "base_bdevs_list": [ 00:12:27.905 { 00:12:27.905 "name": "spare", 00:12:27.905 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:27.905 "is_configured": true, 00:12:27.905 "data_offset": 2048, 00:12:27.905 "data_size": 63488 00:12:27.905 }, 00:12:27.905 { 00:12:27.905 "name": null, 00:12:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.905 "is_configured": false, 00:12:27.905 "data_offset": 0, 00:12:27.905 "data_size": 63488 00:12:27.905 }, 00:12:27.905 { 00:12:27.905 "name": "BaseBdev3", 00:12:27.905 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:27.905 "is_configured": true, 00:12:27.905 "data_offset": 2048, 00:12:27.905 "data_size": 63488 00:12:27.905 }, 00:12:27.905 { 00:12:27.905 "name": "BaseBdev4", 00:12:27.905 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:27.905 "is_configured": true, 00:12:27.905 "data_offset": 2048, 00:12:27.905 "data_size": 63488 00:12:27.905 } 00:12:27.905 ] 00:12:27.905 }' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.905 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.164 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.164 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.164 "name": "raid_bdev1", 00:12:28.164 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:28.164 "strip_size_kb": 0, 00:12:28.164 "state": "online", 00:12:28.164 "raid_level": "raid1", 00:12:28.164 "superblock": true, 00:12:28.164 "num_base_bdevs": 4, 00:12:28.164 "num_base_bdevs_discovered": 3, 00:12:28.164 "num_base_bdevs_operational": 3, 00:12:28.164 "base_bdevs_list": [ 00:12:28.164 { 00:12:28.164 "name": "spare", 00:12:28.164 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:28.164 "is_configured": true, 00:12:28.164 "data_offset": 2048, 00:12:28.164 "data_size": 63488 00:12:28.164 }, 00:12:28.164 { 00:12:28.164 "name": null, 00:12:28.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.164 
"is_configured": false, 00:12:28.164 "data_offset": 0, 00:12:28.164 "data_size": 63488 00:12:28.164 }, 00:12:28.164 { 00:12:28.164 "name": "BaseBdev3", 00:12:28.164 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:28.164 "is_configured": true, 00:12:28.164 "data_offset": 2048, 00:12:28.164 "data_size": 63488 00:12:28.164 }, 00:12:28.164 { 00:12:28.164 "name": "BaseBdev4", 00:12:28.164 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:28.164 "is_configured": true, 00:12:28.164 "data_offset": 2048, 00:12:28.164 "data_size": 63488 00:12:28.164 } 00:12:28.164 ] 00:12:28.164 }' 00:12:28.164 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.164 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.423 88.38 IOPS, 265.12 MiB/s [2024-12-06T11:55:26.181Z] 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.423 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.423 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.423 [2024-12-06 11:55:26.078216] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.423 [2024-12-06 11:55:26.078334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.423 00:12:28.423 Latency(us) 00:12:28.423 [2024-12-06T11:55:26.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.423 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:28.423 raid_bdev1 : 8.20 86.68 260.05 0.00 0.00 15103.76 284.39 116762.83 00:12:28.423 [2024-12-06T11:55:26.181Z] =================================================================================================================== 00:12:28.423 [2024-12-06T11:55:26.182Z] Total : 86.68 260.05 0.00 0.00 15103.76 
284.39 116762.83 00:12:28.424 [2024-12-06 11:55:26.105400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.424 [2024-12-06 11:55:26.105474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.424 [2024-12-06 11:55:26.105611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.424 [2024-12-06 11:55:26.105656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:28.424 { 00:12:28.424 "results": [ 00:12:28.424 { 00:12:28.424 "job": "raid_bdev1", 00:12:28.424 "core_mask": "0x1", 00:12:28.424 "workload": "randrw", 00:12:28.424 "percentage": 50, 00:12:28.424 "status": "finished", 00:12:28.424 "queue_depth": 2, 00:12:28.424 "io_size": 3145728, 00:12:28.424 "runtime": 8.202147, 00:12:28.424 "iops": 86.68462050241236, 00:12:28.424 "mibps": 260.05386150723706, 00:12:28.424 "io_failed": 0, 00:12:28.424 "io_timeout": 0, 00:12:28.424 "avg_latency_us": 15103.755664879405, 00:12:28.424 "min_latency_us": 284.3947598253275, 00:12:28.424 "max_latency_us": 116762.82969432314 00:12:28.424 } 00:12:28.424 ], 00:12:28.424 "core_count": 1 00:12:28.424 } 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:28.424 11:55:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.424 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:28.683 /dev/nbd0 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.683 1+0 records in 00:12:28.683 1+0 records out 00:12:28.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530928 s, 7.7 MB/s 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.683 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:28.942 /dev/nbd1 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.942 11:55:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.942 1+0 records in 00:12:28.942 1+0 records out 00:12:28.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269449 s, 15.2 MB/s 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.942 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:29.223 11:55:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.223 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.224 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:29.483 /dev/nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.483 1+0 records in 00:12:29.483 1+0 records out 00:12:29.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530511 s, 7.7 MB/s 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.483 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.743 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.003 
11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.003 [2024-12-06 11:55:27.605093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.003 [2024-12-06 11:55:27.605157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.003 [2024-12-06 11:55:27.605182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.003 [2024-12-06 11:55:27.605190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.003 [2024-12-06 11:55:27.607396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.003 
[2024-12-06 11:55:27.607437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.003 [2024-12-06 11:55:27.607523] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:30.003 [2024-12-06 11:55:27.607568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.003 [2024-12-06 11:55:27.607676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.003 [2024-12-06 11:55:27.607771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:30.003 spare 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.003 [2024-12-06 11:55:27.707671] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:30.003 [2024-12-06 11:55:27.707696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.003 [2024-12-06 11:55:27.707989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:12:30.003 [2024-12-06 11:55:27.708138] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:30.003 [2024-12-06 11:55:27.708152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:30.003 [2024-12-06 11:55:27.708314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.003 11:55:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.003 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.263 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.263 "name": "raid_bdev1", 00:12:30.263 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:30.263 "strip_size_kb": 0, 00:12:30.263 "state": "online", 00:12:30.263 "raid_level": "raid1", 00:12:30.263 
"superblock": true, 00:12:30.263 "num_base_bdevs": 4, 00:12:30.263 "num_base_bdevs_discovered": 3, 00:12:30.263 "num_base_bdevs_operational": 3, 00:12:30.263 "base_bdevs_list": [ 00:12:30.263 { 00:12:30.263 "name": "spare", 00:12:30.263 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:30.263 "is_configured": true, 00:12:30.263 "data_offset": 2048, 00:12:30.263 "data_size": 63488 00:12:30.263 }, 00:12:30.263 { 00:12:30.263 "name": null, 00:12:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.263 "is_configured": false, 00:12:30.263 "data_offset": 2048, 00:12:30.263 "data_size": 63488 00:12:30.263 }, 00:12:30.263 { 00:12:30.263 "name": "BaseBdev3", 00:12:30.263 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:30.263 "is_configured": true, 00:12:30.263 "data_offset": 2048, 00:12:30.263 "data_size": 63488 00:12:30.263 }, 00:12:30.263 { 00:12:30.263 "name": "BaseBdev4", 00:12:30.263 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:30.263 "is_configured": true, 00:12:30.263 "data_offset": 2048, 00:12:30.263 "data_size": 63488 00:12:30.263 } 00:12:30.263 ] 00:12:30.263 }' 00:12:30.263 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.263 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.522 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.522 "name": "raid_bdev1", 00:12:30.523 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:30.523 "strip_size_kb": 0, 00:12:30.523 "state": "online", 00:12:30.523 "raid_level": "raid1", 00:12:30.523 "superblock": true, 00:12:30.523 "num_base_bdevs": 4, 00:12:30.523 "num_base_bdevs_discovered": 3, 00:12:30.523 "num_base_bdevs_operational": 3, 00:12:30.523 "base_bdevs_list": [ 00:12:30.523 { 00:12:30.523 "name": "spare", 00:12:30.523 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:30.523 "is_configured": true, 00:12:30.523 "data_offset": 2048, 00:12:30.523 "data_size": 63488 00:12:30.523 }, 00:12:30.523 { 00:12:30.523 "name": null, 00:12:30.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.523 "is_configured": false, 00:12:30.523 "data_offset": 2048, 00:12:30.523 "data_size": 63488 00:12:30.523 }, 00:12:30.523 { 00:12:30.523 "name": "BaseBdev3", 00:12:30.523 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:30.523 "is_configured": true, 00:12:30.523 "data_offset": 2048, 00:12:30.523 "data_size": 63488 00:12:30.523 }, 00:12:30.523 { 00:12:30.523 "name": "BaseBdev4", 00:12:30.523 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:30.523 "is_configured": true, 00:12:30.523 "data_offset": 2048, 00:12:30.523 "data_size": 63488 00:12:30.523 } 00:12:30.523 ] 00:12:30.523 }' 00:12:30.523 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:12:30.523 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.523 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.783 [2024-12-06 11:55:28.347941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.783 "name": "raid_bdev1", 00:12:30.783 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:30.783 "strip_size_kb": 0, 00:12:30.783 "state": "online", 00:12:30.783 "raid_level": "raid1", 00:12:30.783 "superblock": true, 00:12:30.783 "num_base_bdevs": 4, 00:12:30.783 "num_base_bdevs_discovered": 2, 00:12:30.783 "num_base_bdevs_operational": 2, 00:12:30.783 "base_bdevs_list": [ 00:12:30.783 { 00:12:30.783 "name": null, 00:12:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.783 "is_configured": false, 00:12:30.783 "data_offset": 0, 00:12:30.783 "data_size": 63488 00:12:30.783 }, 00:12:30.783 { 
00:12:30.783 "name": null, 00:12:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.783 "is_configured": false, 00:12:30.783 "data_offset": 2048, 00:12:30.783 "data_size": 63488 00:12:30.783 }, 00:12:30.783 { 00:12:30.783 "name": "BaseBdev3", 00:12:30.783 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:30.783 "is_configured": true, 00:12:30.783 "data_offset": 2048, 00:12:30.783 "data_size": 63488 00:12:30.783 }, 00:12:30.783 { 00:12:30.783 "name": "BaseBdev4", 00:12:30.783 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:30.783 "is_configured": true, 00:12:30.783 "data_offset": 2048, 00:12:30.783 "data_size": 63488 00:12:30.783 } 00:12:30.783 ] 00:12:30.783 }' 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.783 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.042 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.042 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.042 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.302 [2024-12-06 11:55:28.799358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.302 [2024-12-06 11:55:28.799591] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:31.302 [2024-12-06 11:55:28.799651] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:31.302 [2024-12-06 11:55:28.799716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.302 [2024-12-06 11:55:28.803355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:12:31.302 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.302 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:31.302 [2024-12-06 11:55:28.805214] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.243 "name": "raid_bdev1", 00:12:32.243 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:32.243 "strip_size_kb": 0, 00:12:32.243 "state": "online", 
00:12:32.243 "raid_level": "raid1", 00:12:32.243 "superblock": true, 00:12:32.243 "num_base_bdevs": 4, 00:12:32.243 "num_base_bdevs_discovered": 3, 00:12:32.243 "num_base_bdevs_operational": 3, 00:12:32.243 "process": { 00:12:32.243 "type": "rebuild", 00:12:32.243 "target": "spare", 00:12:32.243 "progress": { 00:12:32.243 "blocks": 20480, 00:12:32.243 "percent": 32 00:12:32.243 } 00:12:32.243 }, 00:12:32.243 "base_bdevs_list": [ 00:12:32.243 { 00:12:32.243 "name": "spare", 00:12:32.243 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:32.243 "is_configured": true, 00:12:32.243 "data_offset": 2048, 00:12:32.243 "data_size": 63488 00:12:32.243 }, 00:12:32.243 { 00:12:32.243 "name": null, 00:12:32.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.243 "is_configured": false, 00:12:32.243 "data_offset": 2048, 00:12:32.243 "data_size": 63488 00:12:32.243 }, 00:12:32.243 { 00:12:32.243 "name": "BaseBdev3", 00:12:32.243 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:32.243 "is_configured": true, 00:12:32.243 "data_offset": 2048, 00:12:32.243 "data_size": 63488 00:12:32.243 }, 00:12:32.243 { 00:12:32.243 "name": "BaseBdev4", 00:12:32.243 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:32.243 "is_configured": true, 00:12:32.243 "data_offset": 2048, 00:12:32.243 "data_size": 63488 00:12:32.243 } 00:12:32.243 ] 00:12:32.243 }' 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:32.243 11:55:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.243 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.243 [2024-12-06 11:55:29.970165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.503 [2024-12-06 11:55:30.009205] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.503 [2024-12-06 11:55:30.009265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.503 [2024-12-06 11:55:30.009283] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.503 [2024-12-06 11:55:30.009290] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.503 11:55:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.503 "name": "raid_bdev1", 00:12:32.503 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:32.503 "strip_size_kb": 0, 00:12:32.503 "state": "online", 00:12:32.503 "raid_level": "raid1", 00:12:32.503 "superblock": true, 00:12:32.503 "num_base_bdevs": 4, 00:12:32.503 "num_base_bdevs_discovered": 2, 00:12:32.503 "num_base_bdevs_operational": 2, 00:12:32.503 "base_bdevs_list": [ 00:12:32.503 { 00:12:32.503 "name": null, 00:12:32.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.503 "is_configured": false, 00:12:32.503 "data_offset": 0, 00:12:32.503 "data_size": 63488 00:12:32.503 }, 00:12:32.503 { 00:12:32.503 "name": null, 00:12:32.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.503 "is_configured": false, 00:12:32.503 "data_offset": 2048, 00:12:32.503 "data_size": 63488 00:12:32.503 }, 00:12:32.503 { 00:12:32.503 "name": "BaseBdev3", 00:12:32.503 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:32.503 "is_configured": true, 00:12:32.503 "data_offset": 2048, 00:12:32.503 "data_size": 63488 00:12:32.503 }, 00:12:32.503 { 00:12:32.503 "name": "BaseBdev4", 00:12:32.503 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:32.503 "is_configured": true, 00:12:32.503 "data_offset": 2048, 00:12:32.503 
"data_size": 63488 00:12:32.503 } 00:12:32.503 ] 00:12:32.503 }' 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.503 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.763 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.763 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.763 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.763 [2024-12-06 11:55:30.488559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.763 [2024-12-06 11:55:30.488671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.763 [2024-12-06 11:55:30.488717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:32.763 [2024-12-06 11:55:30.488746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.763 [2024-12-06 11:55:30.489191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.763 [2024-12-06 11:55:30.489260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.763 [2024-12-06 11:55:30.489371] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:32.763 [2024-12-06 11:55:30.489409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:32.763 [2024-12-06 11:55:30.489453] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:32.763 [2024-12-06 11:55:30.489516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.763 [2024-12-06 11:55:30.492700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:12:32.763 spare 00:12:32.763 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.763 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:32.763 [2024-12-06 11:55:30.494572] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.145 "name": "raid_bdev1", 00:12:34.145 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:34.145 "strip_size_kb": 0, 00:12:34.145 
"state": "online", 00:12:34.145 "raid_level": "raid1", 00:12:34.145 "superblock": true, 00:12:34.145 "num_base_bdevs": 4, 00:12:34.145 "num_base_bdevs_discovered": 3, 00:12:34.145 "num_base_bdevs_operational": 3, 00:12:34.145 "process": { 00:12:34.145 "type": "rebuild", 00:12:34.145 "target": "spare", 00:12:34.145 "progress": { 00:12:34.145 "blocks": 20480, 00:12:34.145 "percent": 32 00:12:34.145 } 00:12:34.145 }, 00:12:34.145 "base_bdevs_list": [ 00:12:34.145 { 00:12:34.145 "name": "spare", 00:12:34.145 "uuid": "3a845da6-3d3d-5812-92cf-770720db5f7a", 00:12:34.145 "is_configured": true, 00:12:34.145 "data_offset": 2048, 00:12:34.145 "data_size": 63488 00:12:34.145 }, 00:12:34.145 { 00:12:34.145 "name": null, 00:12:34.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.145 "is_configured": false, 00:12:34.145 "data_offset": 2048, 00:12:34.145 "data_size": 63488 00:12:34.145 }, 00:12:34.145 { 00:12:34.145 "name": "BaseBdev3", 00:12:34.145 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:34.145 "is_configured": true, 00:12:34.145 "data_offset": 2048, 00:12:34.145 "data_size": 63488 00:12:34.145 }, 00:12:34.145 { 00:12:34.145 "name": "BaseBdev4", 00:12:34.145 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:34.145 "is_configured": true, 00:12:34.145 "data_offset": 2048, 00:12:34.145 "data_size": 63488 00:12:34.145 } 00:12:34.145 ] 00:12:34.145 }' 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.145 11:55:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.145 [2024-12-06 11:55:31.659507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.145 [2024-12-06 11:55:31.698635] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.145 [2024-12-06 11:55:31.698740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.145 [2024-12-06 11:55:31.698774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.145 [2024-12-06 11:55:31.698795] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.145 11:55:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.145 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.145 "name": "raid_bdev1", 00:12:34.145 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:34.145 "strip_size_kb": 0, 00:12:34.145 "state": "online", 00:12:34.145 "raid_level": "raid1", 00:12:34.145 "superblock": true, 00:12:34.145 "num_base_bdevs": 4, 00:12:34.145 "num_base_bdevs_discovered": 2, 00:12:34.145 "num_base_bdevs_operational": 2, 00:12:34.145 "base_bdevs_list": [ 00:12:34.145 { 00:12:34.145 "name": null, 00:12:34.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.145 "is_configured": false, 00:12:34.146 "data_offset": 0, 00:12:34.146 "data_size": 63488 00:12:34.146 }, 00:12:34.146 { 00:12:34.146 "name": null, 00:12:34.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.146 "is_configured": false, 00:12:34.146 "data_offset": 2048, 00:12:34.146 "data_size": 63488 00:12:34.146 }, 00:12:34.146 { 00:12:34.146 "name": "BaseBdev3", 00:12:34.146 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:34.146 "is_configured": true, 00:12:34.146 "data_offset": 2048, 00:12:34.146 "data_size": 63488 00:12:34.146 }, 00:12:34.146 { 00:12:34.146 "name": "BaseBdev4", 00:12:34.146 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:34.146 "is_configured": true, 00:12:34.146 "data_offset": 2048, 00:12:34.146 
"data_size": 63488 00:12:34.146 } 00:12:34.146 ] 00:12:34.146 }' 00:12:34.146 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.146 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.739 "name": "raid_bdev1", 00:12:34.739 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:34.739 "strip_size_kb": 0, 00:12:34.739 "state": "online", 00:12:34.739 "raid_level": "raid1", 00:12:34.739 "superblock": true, 00:12:34.739 "num_base_bdevs": 4, 00:12:34.739 "num_base_bdevs_discovered": 2, 00:12:34.739 "num_base_bdevs_operational": 2, 00:12:34.739 "base_bdevs_list": [ 00:12:34.739 { 00:12:34.739 "name": null, 00:12:34.739 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:34.739 "is_configured": false, 00:12:34.739 "data_offset": 0, 00:12:34.739 "data_size": 63488 00:12:34.739 }, 00:12:34.739 { 00:12:34.739 "name": null, 00:12:34.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.739 "is_configured": false, 00:12:34.739 "data_offset": 2048, 00:12:34.739 "data_size": 63488 00:12:34.739 }, 00:12:34.739 { 00:12:34.739 "name": "BaseBdev3", 00:12:34.739 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:34.739 "is_configured": true, 00:12:34.739 "data_offset": 2048, 00:12:34.739 "data_size": 63488 00:12:34.739 }, 00:12:34.739 { 00:12:34.739 "name": "BaseBdev4", 00:12:34.739 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:34.739 "is_configured": true, 00:12:34.739 "data_offset": 2048, 00:12:34.739 "data_size": 63488 00:12:34.739 } 00:12:34.739 ] 00:12:34.739 }' 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.739 11:55:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 [2024-12-06 11:55:32.349624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:34.739 [2024-12-06 11:55:32.349698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.739 [2024-12-06 11:55:32.349720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:34.739 [2024-12-06 11:55:32.349729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.739 [2024-12-06 11:55:32.350102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.739 [2024-12-06 11:55:32.350119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.739 [2024-12-06 11:55:32.350185] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:34.739 [2024-12-06 11:55:32.350205] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:34.739 [2024-12-06 11:55:32.350221] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:34.739 [2024-12-06 11:55:32.350252] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:34.739 BaseBdev1 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.739 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.679 "name": "raid_bdev1", 00:12:35.679 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:35.679 "strip_size_kb": 0, 00:12:35.679 "state": "online", 00:12:35.679 "raid_level": "raid1", 00:12:35.679 "superblock": true, 00:12:35.679 "num_base_bdevs": 4, 00:12:35.679 "num_base_bdevs_discovered": 2, 00:12:35.679 "num_base_bdevs_operational": 2, 00:12:35.679 "base_bdevs_list": [ 00:12:35.679 { 00:12:35.679 "name": null, 00:12:35.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.679 "is_configured": false, 00:12:35.679 
"data_offset": 0, 00:12:35.679 "data_size": 63488 00:12:35.679 }, 00:12:35.679 { 00:12:35.679 "name": null, 00:12:35.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.679 "is_configured": false, 00:12:35.679 "data_offset": 2048, 00:12:35.679 "data_size": 63488 00:12:35.679 }, 00:12:35.679 { 00:12:35.679 "name": "BaseBdev3", 00:12:35.679 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:35.679 "is_configured": true, 00:12:35.679 "data_offset": 2048, 00:12:35.679 "data_size": 63488 00:12:35.679 }, 00:12:35.679 { 00:12:35.679 "name": "BaseBdev4", 00:12:35.679 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:35.679 "is_configured": true, 00:12:35.679 "data_offset": 2048, 00:12:35.679 "data_size": 63488 00:12:35.679 } 00:12:35.679 ] 00:12:35.679 }' 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.679 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.249 "name": "raid_bdev1", 00:12:36.249 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:36.249 "strip_size_kb": 0, 00:12:36.249 "state": "online", 00:12:36.249 "raid_level": "raid1", 00:12:36.249 "superblock": true, 00:12:36.249 "num_base_bdevs": 4, 00:12:36.249 "num_base_bdevs_discovered": 2, 00:12:36.249 "num_base_bdevs_operational": 2, 00:12:36.249 "base_bdevs_list": [ 00:12:36.249 { 00:12:36.249 "name": null, 00:12:36.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.249 "is_configured": false, 00:12:36.249 "data_offset": 0, 00:12:36.249 "data_size": 63488 00:12:36.249 }, 00:12:36.249 { 00:12:36.249 "name": null, 00:12:36.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.249 "is_configured": false, 00:12:36.249 "data_offset": 2048, 00:12:36.249 "data_size": 63488 00:12:36.249 }, 00:12:36.249 { 00:12:36.249 "name": "BaseBdev3", 00:12:36.249 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:36.249 "is_configured": true, 00:12:36.249 "data_offset": 2048, 00:12:36.249 "data_size": 63488 00:12:36.249 }, 00:12:36.249 { 00:12:36.249 "name": "BaseBdev4", 00:12:36.249 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:36.249 "is_configured": true, 00:12:36.249 "data_offset": 2048, 00:12:36.249 "data_size": 63488 00:12:36.249 } 00:12:36.249 ] 00:12:36.249 }' 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.249 
11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.249 [2024-12-06 11:55:33.987037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.249 [2024-12-06 11:55:33.987190] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:36.249 [2024-12-06 11:55:33.987204] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.249 request: 00:12:36.249 { 00:12:36.249 "base_bdev": "BaseBdev1", 00:12:36.249 "raid_bdev": "raid_bdev1", 00:12:36.249 "method": "bdev_raid_add_base_bdev", 00:12:36.249 "req_id": 1 00:12:36.249 } 00:12:36.249 Got JSON-RPC error response 00:12:36.249 response: 00:12:36.249 { 00:12:36.249 "code": -22, 00:12:36.249 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:36.249 } 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:36.249 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:37.631 11:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.631 11:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.631 11:55:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.631 "name": "raid_bdev1", 00:12:37.631 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:37.631 "strip_size_kb": 0, 00:12:37.631 "state": "online", 00:12:37.631 "raid_level": "raid1", 00:12:37.631 "superblock": true, 00:12:37.631 "num_base_bdevs": 4, 00:12:37.631 "num_base_bdevs_discovered": 2, 00:12:37.631 "num_base_bdevs_operational": 2, 00:12:37.631 "base_bdevs_list": [ 00:12:37.631 { 00:12:37.631 "name": null, 00:12:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.631 "is_configured": false, 00:12:37.631 "data_offset": 0, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": null, 00:12:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.631 "is_configured": false, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": "BaseBdev3", 00:12:37.631 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:37.631 "is_configured": true, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": "BaseBdev4", 00:12:37.631 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:37.631 "is_configured": true, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 } 00:12:37.631 ] 00:12:37.631 }' 00:12:37.631 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.631 11:55:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.891 "name": "raid_bdev1", 00:12:37.891 "uuid": "c12bb8f9-ac59-40cd-8007-a3eab39d96e9", 00:12:37.891 "strip_size_kb": 0, 00:12:37.891 "state": "online", 00:12:37.891 "raid_level": "raid1", 00:12:37.891 "superblock": true, 00:12:37.891 "num_base_bdevs": 4, 00:12:37.891 "num_base_bdevs_discovered": 2, 00:12:37.891 "num_base_bdevs_operational": 2, 00:12:37.891 "base_bdevs_list": [ 00:12:37.891 { 00:12:37.891 "name": null, 00:12:37.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.891 "is_configured": false, 00:12:37.891 "data_offset": 0, 00:12:37.891 "data_size": 63488 00:12:37.891 }, 00:12:37.891 { 00:12:37.891 "name": null, 00:12:37.891 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:37.891 "is_configured": false, 00:12:37.891 "data_offset": 2048, 00:12:37.891 "data_size": 63488 00:12:37.891 }, 00:12:37.891 { 00:12:37.891 "name": "BaseBdev3", 00:12:37.891 "uuid": "07d8b050-2c21-5a3f-8ab8-3359685c4ab5", 00:12:37.891 "is_configured": true, 00:12:37.891 "data_offset": 2048, 00:12:37.891 "data_size": 63488 00:12:37.891 }, 00:12:37.891 { 00:12:37.891 "name": "BaseBdev4", 00:12:37.891 "uuid": "d1f94e88-523e-5c10-9df6-e28480dfd77d", 00:12:37.891 "is_configured": true, 00:12:37.891 "data_offset": 2048, 00:12:37.891 "data_size": 63488 00:12:37.891 } 00:12:37.891 ] 00:12:37.891 }' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89343 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89343 ']' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89343 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89343 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:12:37.891 killing process with pid 89343 00:12:37.891 Received shutdown signal, test time was about 17.748747 seconds 00:12:37.891 00:12:37.891 Latency(us) 00:12:37.891 [2024-12-06T11:55:35.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.891 [2024-12-06T11:55:35.649Z] =================================================================================================================== 00:12:37.891 [2024-12-06T11:55:35.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89343' 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89343 00:12:37.891 [2024-12-06 11:55:35.630323] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.891 [2024-12-06 11:55:35.630437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.891 [2024-12-06 11:55:35.630501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.891 [2024-12-06 11:55:35.630511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:37.891 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89343 00:12:38.151 [2024-12-06 11:55:35.677258] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.410 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:38.410 00:12:38.410 real 0m19.673s 00:12:38.410 user 0m26.188s 00:12:38.410 sys 0m2.478s 00:12:38.410 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.410 ************************************ 00:12:38.410 END TEST raid_rebuild_test_sb_io 00:12:38.410 ************************************ 00:12:38.410 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.410 11:55:35 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:38.410 11:55:35 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:38.410 11:55:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:38.410 11:55:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.410 11:55:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.410 ************************************ 00:12:38.410 START TEST raid5f_state_function_test 00:12:38.411 ************************************ 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.411 11:55:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:38.411 11:55:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90048 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:38.411 Process raid pid: 90048 00:12:38.411 
11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90048' 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90048 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90048 ']' 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.411 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.411 [2024-12-06 11:55:36.091643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:38.411 [2024-12-06 11:55:36.091788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.670 [2024-12-06 11:55:36.236954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.670 [2024-12-06 11:55:36.282181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.670 [2024-12-06 11:55:36.324799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.670 [2024-12-06 11:55:36.324837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.239 [2024-12-06 11:55:36.902070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.239 [2024-12-06 11:55:36.902122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.239 [2024-12-06 11:55:36.902144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.239 [2024-12-06 11:55:36.902155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.239 [2024-12-06 11:55:36.902161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:39.239 [2024-12-06 11:55:36.902172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.239 "name": "Existed_Raid", 00:12:39.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.239 "strip_size_kb": 64, 00:12:39.239 "state": "configuring", 00:12:39.239 "raid_level": "raid5f", 00:12:39.239 "superblock": false, 00:12:39.239 "num_base_bdevs": 3, 00:12:39.239 "num_base_bdevs_discovered": 0, 00:12:39.239 "num_base_bdevs_operational": 3, 00:12:39.239 "base_bdevs_list": [ 00:12:39.239 { 00:12:39.239 "name": "BaseBdev1", 00:12:39.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.239 "is_configured": false, 00:12:39.239 "data_offset": 0, 00:12:39.239 "data_size": 0 00:12:39.239 }, 00:12:39.239 { 00:12:39.239 "name": "BaseBdev2", 00:12:39.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.239 "is_configured": false, 00:12:39.239 "data_offset": 0, 00:12:39.239 "data_size": 0 00:12:39.239 }, 00:12:39.239 { 00:12:39.239 "name": "BaseBdev3", 00:12:39.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.239 "is_configured": false, 00:12:39.239 "data_offset": 0, 00:12:39.239 "data_size": 0 00:12:39.239 } 00:12:39.239 ] 00:12:39.239 }' 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.239 11:55:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 [2024-12-06 11:55:37.345188] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.808 [2024-12-06 11:55:37.345312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 [2024-12-06 11:55:37.357199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.808 [2024-12-06 11:55:37.357253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.808 [2024-12-06 11:55:37.357262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.808 [2024-12-06 11:55:37.357270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.808 [2024-12-06 11:55:37.357292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:39.808 [2024-12-06 11:55:37.357301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 [2024-12-06 11:55:37.377964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.808 BaseBdev1 00:12:39.808 11:55:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.808 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.809 [ 00:12:39.809 { 00:12:39.809 "name": "BaseBdev1", 00:12:39.809 "aliases": [ 00:12:39.809 "7524460c-7b67-46a6-b949-76595727eea8" 00:12:39.809 ], 00:12:39.809 "product_name": "Malloc disk", 00:12:39.809 "block_size": 512, 00:12:39.809 "num_blocks": 65536, 00:12:39.809 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:39.809 "assigned_rate_limits": { 00:12:39.809 "rw_ios_per_sec": 0, 00:12:39.809 
"rw_mbytes_per_sec": 0, 00:12:39.809 "r_mbytes_per_sec": 0, 00:12:39.809 "w_mbytes_per_sec": 0 00:12:39.809 }, 00:12:39.809 "claimed": true, 00:12:39.809 "claim_type": "exclusive_write", 00:12:39.809 "zoned": false, 00:12:39.809 "supported_io_types": { 00:12:39.809 "read": true, 00:12:39.809 "write": true, 00:12:39.809 "unmap": true, 00:12:39.809 "flush": true, 00:12:39.809 "reset": true, 00:12:39.809 "nvme_admin": false, 00:12:39.809 "nvme_io": false, 00:12:39.809 "nvme_io_md": false, 00:12:39.809 "write_zeroes": true, 00:12:39.809 "zcopy": true, 00:12:39.809 "get_zone_info": false, 00:12:39.809 "zone_management": false, 00:12:39.809 "zone_append": false, 00:12:39.809 "compare": false, 00:12:39.809 "compare_and_write": false, 00:12:39.809 "abort": true, 00:12:39.809 "seek_hole": false, 00:12:39.809 "seek_data": false, 00:12:39.809 "copy": true, 00:12:39.809 "nvme_iov_md": false 00:12:39.809 }, 00:12:39.809 "memory_domains": [ 00:12:39.809 { 00:12:39.809 "dma_device_id": "system", 00:12:39.809 "dma_device_type": 1 00:12:39.809 }, 00:12:39.809 { 00:12:39.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.809 "dma_device_type": 2 00:12:39.809 } 00:12:39.809 ], 00:12:39.809 "driver_specific": {} 00:12:39.809 } 00:12:39.809 ] 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:39.809 11:55:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.809 "name": "Existed_Raid", 00:12:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.809 "strip_size_kb": 64, 00:12:39.809 "state": "configuring", 00:12:39.809 "raid_level": "raid5f", 00:12:39.809 "superblock": false, 00:12:39.809 "num_base_bdevs": 3, 00:12:39.809 "num_base_bdevs_discovered": 1, 00:12:39.809 "num_base_bdevs_operational": 3, 00:12:39.809 "base_bdevs_list": [ 00:12:39.809 { 00:12:39.809 "name": "BaseBdev1", 00:12:39.809 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:39.809 "is_configured": true, 00:12:39.809 "data_offset": 0, 00:12:39.809 "data_size": 65536 00:12:39.809 }, 00:12:39.809 { 00:12:39.809 "name": 
"BaseBdev2", 00:12:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.809 "is_configured": false, 00:12:39.809 "data_offset": 0, 00:12:39.809 "data_size": 0 00:12:39.809 }, 00:12:39.809 { 00:12:39.809 "name": "BaseBdev3", 00:12:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.809 "is_configured": false, 00:12:39.809 "data_offset": 0, 00:12:39.809 "data_size": 0 00:12:39.809 } 00:12:39.809 ] 00:12:39.809 }' 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.809 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.377 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.378 [2024-12-06 11:55:37.905185] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.378 [2024-12-06 11:55:37.905228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.378 [2024-12-06 11:55:37.917224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.378 [2024-12-06 11:55:37.918971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:40.378 [2024-12-06 11:55:37.919005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.378 [2024-12-06 11:55:37.919014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.378 [2024-12-06 11:55:37.919025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.378 "name": "Existed_Raid", 00:12:40.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.378 "strip_size_kb": 64, 00:12:40.378 "state": "configuring", 00:12:40.378 "raid_level": "raid5f", 00:12:40.378 "superblock": false, 00:12:40.378 "num_base_bdevs": 3, 00:12:40.378 "num_base_bdevs_discovered": 1, 00:12:40.378 "num_base_bdevs_operational": 3, 00:12:40.378 "base_bdevs_list": [ 00:12:40.378 { 00:12:40.378 "name": "BaseBdev1", 00:12:40.378 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:40.378 "is_configured": true, 00:12:40.378 "data_offset": 0, 00:12:40.378 "data_size": 65536 00:12:40.378 }, 00:12:40.378 { 00:12:40.378 "name": "BaseBdev2", 00:12:40.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.378 "is_configured": false, 00:12:40.378 "data_offset": 0, 00:12:40.378 "data_size": 0 00:12:40.378 }, 00:12:40.378 { 00:12:40.378 "name": "BaseBdev3", 00:12:40.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.378 "is_configured": false, 00:12:40.378 "data_offset": 0, 00:12:40.378 "data_size": 0 00:12:40.378 } 00:12:40.378 ] 00:12:40.378 }' 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.378 11:55:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.947 [2024-12-06 11:55:38.440751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.947 BaseBdev2 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.947 [ 00:12:40.947 { 00:12:40.947 "name": "BaseBdev2", 00:12:40.947 "aliases": [ 00:12:40.947 "85fb73cb-7bff-4ac0-a750-3f1fb8501d79" 00:12:40.947 ], 00:12:40.947 "product_name": "Malloc disk", 00:12:40.947 "block_size": 512, 00:12:40.947 "num_blocks": 65536, 00:12:40.947 "uuid": "85fb73cb-7bff-4ac0-a750-3f1fb8501d79", 00:12:40.947 "assigned_rate_limits": { 00:12:40.947 "rw_ios_per_sec": 0, 00:12:40.947 "rw_mbytes_per_sec": 0, 00:12:40.947 "r_mbytes_per_sec": 0, 00:12:40.947 "w_mbytes_per_sec": 0 00:12:40.947 }, 00:12:40.947 "claimed": true, 00:12:40.947 "claim_type": "exclusive_write", 00:12:40.947 "zoned": false, 00:12:40.947 "supported_io_types": { 00:12:40.947 "read": true, 00:12:40.947 "write": true, 00:12:40.947 "unmap": true, 00:12:40.947 "flush": true, 00:12:40.947 "reset": true, 00:12:40.947 "nvme_admin": false, 00:12:40.947 "nvme_io": false, 00:12:40.947 "nvme_io_md": false, 00:12:40.947 "write_zeroes": true, 00:12:40.947 "zcopy": true, 00:12:40.947 "get_zone_info": false, 00:12:40.947 "zone_management": false, 00:12:40.947 "zone_append": false, 00:12:40.947 "compare": false, 00:12:40.947 "compare_and_write": false, 00:12:40.947 "abort": true, 00:12:40.947 "seek_hole": false, 00:12:40.947 "seek_data": false, 00:12:40.947 "copy": true, 00:12:40.947 "nvme_iov_md": false 00:12:40.947 }, 00:12:40.947 "memory_domains": [ 00:12:40.947 { 00:12:40.947 "dma_device_id": "system", 00:12:40.947 "dma_device_type": 1 00:12:40.947 }, 00:12:40.947 { 00:12:40.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.947 "dma_device_type": 2 00:12:40.947 } 00:12:40.947 ], 00:12:40.947 "driver_specific": {} 00:12:40.947 } 00:12:40.947 ] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.947 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:40.947 "name": "Existed_Raid", 00:12:40.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.947 "strip_size_kb": 64, 00:12:40.947 "state": "configuring", 00:12:40.947 "raid_level": "raid5f", 00:12:40.947 "superblock": false, 00:12:40.947 "num_base_bdevs": 3, 00:12:40.947 "num_base_bdevs_discovered": 2, 00:12:40.947 "num_base_bdevs_operational": 3, 00:12:40.947 "base_bdevs_list": [ 00:12:40.947 { 00:12:40.947 "name": "BaseBdev1", 00:12:40.948 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:40.948 "is_configured": true, 00:12:40.948 "data_offset": 0, 00:12:40.948 "data_size": 65536 00:12:40.948 }, 00:12:40.948 { 00:12:40.948 "name": "BaseBdev2", 00:12:40.948 "uuid": "85fb73cb-7bff-4ac0-a750-3f1fb8501d79", 00:12:40.948 "is_configured": true, 00:12:40.948 "data_offset": 0, 00:12:40.948 "data_size": 65536 00:12:40.948 }, 00:12:40.948 { 00:12:40.948 "name": "BaseBdev3", 00:12:40.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.948 "is_configured": false, 00:12:40.948 "data_offset": 0, 00:12:40.948 "data_size": 0 00:12:40.948 } 00:12:40.948 ] 00:12:40.948 }' 00:12:40.948 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.948 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 [2024-12-06 11:55:38.914996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.208 [2024-12-06 11:55:38.915058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:41.208 [2024-12-06 11:55:38.915070] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:41.208 [2024-12-06 11:55:38.915373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:41.208 [2024-12-06 11:55:38.915886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:41.208 [2024-12-06 11:55:38.915907] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:41.208 [2024-12-06 11:55:38.916118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.208 BaseBdev3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.208 [ 00:12:41.208 { 00:12:41.208 "name": "BaseBdev3", 00:12:41.208 "aliases": [ 00:12:41.208 "dfd766ac-2438-4d90-bd28-4708fac11276" 00:12:41.208 ], 00:12:41.208 "product_name": "Malloc disk", 00:12:41.208 "block_size": 512, 00:12:41.208 "num_blocks": 65536, 00:12:41.208 "uuid": "dfd766ac-2438-4d90-bd28-4708fac11276", 00:12:41.208 "assigned_rate_limits": { 00:12:41.208 "rw_ios_per_sec": 0, 00:12:41.208 "rw_mbytes_per_sec": 0, 00:12:41.208 "r_mbytes_per_sec": 0, 00:12:41.208 "w_mbytes_per_sec": 0 00:12:41.208 }, 00:12:41.208 "claimed": true, 00:12:41.208 "claim_type": "exclusive_write", 00:12:41.208 "zoned": false, 00:12:41.208 "supported_io_types": { 00:12:41.208 "read": true, 00:12:41.208 "write": true, 00:12:41.208 "unmap": true, 00:12:41.208 "flush": true, 00:12:41.208 "reset": true, 00:12:41.208 "nvme_admin": false, 00:12:41.208 "nvme_io": false, 00:12:41.208 "nvme_io_md": false, 00:12:41.208 "write_zeroes": true, 00:12:41.208 "zcopy": true, 00:12:41.208 "get_zone_info": false, 00:12:41.208 "zone_management": false, 00:12:41.208 "zone_append": false, 00:12:41.208 "compare": false, 00:12:41.208 "compare_and_write": false, 00:12:41.208 "abort": true, 00:12:41.208 "seek_hole": false, 00:12:41.208 "seek_data": false, 00:12:41.208 "copy": true, 00:12:41.208 "nvme_iov_md": false 00:12:41.208 }, 00:12:41.208 "memory_domains": [ 00:12:41.208 { 00:12:41.208 "dma_device_id": "system", 00:12:41.208 "dma_device_type": 1 00:12:41.208 }, 00:12:41.208 { 00:12:41.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.208 "dma_device_type": 2 00:12:41.208 } 00:12:41.208 ], 00:12:41.208 "driver_specific": {} 00:12:41.208 } 00:12:41.208 ] 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.208 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.469 11:55:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.469 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.469 "name": "Existed_Raid", 00:12:41.469 "uuid": "83677215-72e8-422d-813c-8bdc364ca23e", 00:12:41.469 "strip_size_kb": 64, 00:12:41.469 "state": "online", 00:12:41.469 "raid_level": "raid5f", 00:12:41.469 "superblock": false, 00:12:41.469 "num_base_bdevs": 3, 00:12:41.469 "num_base_bdevs_discovered": 3, 00:12:41.469 "num_base_bdevs_operational": 3, 00:12:41.469 "base_bdevs_list": [ 00:12:41.469 { 00:12:41.469 "name": "BaseBdev1", 00:12:41.469 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:41.469 "is_configured": true, 00:12:41.469 "data_offset": 0, 00:12:41.469 "data_size": 65536 00:12:41.469 }, 00:12:41.469 { 00:12:41.469 "name": "BaseBdev2", 00:12:41.469 "uuid": "85fb73cb-7bff-4ac0-a750-3f1fb8501d79", 00:12:41.469 "is_configured": true, 00:12:41.469 "data_offset": 0, 00:12:41.469 "data_size": 65536 00:12:41.469 }, 00:12:41.469 { 00:12:41.469 "name": "BaseBdev3", 00:12:41.469 "uuid": "dfd766ac-2438-4d90-bd28-4708fac11276", 00:12:41.469 "is_configured": true, 00:12:41.469 "data_offset": 0, 00:12:41.469 "data_size": 65536 00:12:41.469 } 00:12:41.469 ] 00:12:41.469 }' 00:12:41.469 11:55:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.469 11:55:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:41.729 11:55:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.729 [2024-12-06 11:55:39.390397] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.729 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:41.729 "name": "Existed_Raid", 00:12:41.729 "aliases": [ 00:12:41.729 "83677215-72e8-422d-813c-8bdc364ca23e" 00:12:41.729 ], 00:12:41.729 "product_name": "Raid Volume", 00:12:41.729 "block_size": 512, 00:12:41.729 "num_blocks": 131072, 00:12:41.729 "uuid": "83677215-72e8-422d-813c-8bdc364ca23e", 00:12:41.729 "assigned_rate_limits": { 00:12:41.729 "rw_ios_per_sec": 0, 00:12:41.729 "rw_mbytes_per_sec": 0, 00:12:41.729 "r_mbytes_per_sec": 0, 00:12:41.729 "w_mbytes_per_sec": 0 00:12:41.729 }, 00:12:41.729 "claimed": false, 00:12:41.729 "zoned": false, 00:12:41.729 "supported_io_types": { 00:12:41.729 "read": true, 00:12:41.729 "write": true, 00:12:41.729 "unmap": false, 00:12:41.729 "flush": false, 00:12:41.729 "reset": true, 00:12:41.729 "nvme_admin": false, 00:12:41.729 "nvme_io": false, 00:12:41.729 "nvme_io_md": false, 00:12:41.729 "write_zeroes": true, 00:12:41.729 "zcopy": false, 00:12:41.729 "get_zone_info": false, 00:12:41.729 "zone_management": false, 00:12:41.729 "zone_append": false, 
00:12:41.729 "compare": false, 00:12:41.729 "compare_and_write": false, 00:12:41.729 "abort": false, 00:12:41.729 "seek_hole": false, 00:12:41.729 "seek_data": false, 00:12:41.729 "copy": false, 00:12:41.729 "nvme_iov_md": false 00:12:41.729 }, 00:12:41.729 "driver_specific": { 00:12:41.729 "raid": { 00:12:41.729 "uuid": "83677215-72e8-422d-813c-8bdc364ca23e", 00:12:41.729 "strip_size_kb": 64, 00:12:41.729 "state": "online", 00:12:41.729 "raid_level": "raid5f", 00:12:41.729 "superblock": false, 00:12:41.729 "num_base_bdevs": 3, 00:12:41.729 "num_base_bdevs_discovered": 3, 00:12:41.729 "num_base_bdevs_operational": 3, 00:12:41.729 "base_bdevs_list": [ 00:12:41.729 { 00:12:41.729 "name": "BaseBdev1", 00:12:41.729 "uuid": "7524460c-7b67-46a6-b949-76595727eea8", 00:12:41.729 "is_configured": true, 00:12:41.729 "data_offset": 0, 00:12:41.729 "data_size": 65536 00:12:41.729 }, 00:12:41.729 { 00:12:41.730 "name": "BaseBdev2", 00:12:41.730 "uuid": "85fb73cb-7bff-4ac0-a750-3f1fb8501d79", 00:12:41.730 "is_configured": true, 00:12:41.730 "data_offset": 0, 00:12:41.730 "data_size": 65536 00:12:41.730 }, 00:12:41.730 { 00:12:41.730 "name": "BaseBdev3", 00:12:41.730 "uuid": "dfd766ac-2438-4d90-bd28-4708fac11276", 00:12:41.730 "is_configured": true, 00:12:41.730 "data_offset": 0, 00:12:41.730 "data_size": 65536 00:12:41.730 } 00:12:41.730 ] 00:12:41.730 } 00:12:41.730 } 00:12:41.730 }' 00:12:41.730 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:41.730 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:41.730 BaseBdev2 00:12:41.730 BaseBdev3' 00:12:41.730 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.990 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.991 [2024-12-06 11:55:39.665770] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:41.991 
11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.991 "name": "Existed_Raid", 00:12:41.991 "uuid": "83677215-72e8-422d-813c-8bdc364ca23e", 00:12:41.991 "strip_size_kb": 64, 00:12:41.991 "state": 
"online", 00:12:41.991 "raid_level": "raid5f", 00:12:41.991 "superblock": false, 00:12:41.991 "num_base_bdevs": 3, 00:12:41.991 "num_base_bdevs_discovered": 2, 00:12:41.991 "num_base_bdevs_operational": 2, 00:12:41.991 "base_bdevs_list": [ 00:12:41.991 { 00:12:41.991 "name": null, 00:12:41.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.991 "is_configured": false, 00:12:41.991 "data_offset": 0, 00:12:41.991 "data_size": 65536 00:12:41.991 }, 00:12:41.991 { 00:12:41.991 "name": "BaseBdev2", 00:12:41.991 "uuid": "85fb73cb-7bff-4ac0-a750-3f1fb8501d79", 00:12:41.991 "is_configured": true, 00:12:41.991 "data_offset": 0, 00:12:41.991 "data_size": 65536 00:12:41.991 }, 00:12:41.991 { 00:12:41.991 "name": "BaseBdev3", 00:12:41.991 "uuid": "dfd766ac-2438-4d90-bd28-4708fac11276", 00:12:41.991 "is_configured": true, 00:12:41.991 "data_offset": 0, 00:12:41.991 "data_size": 65536 00:12:41.991 } 00:12:41.991 ] 00:12:41.991 }' 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.991 11:55:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 [2024-12-06 11:55:40.132326] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.561 [2024-12-06 11:55:40.132476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.561 [2024-12-06 11:55:40.143719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 [2024-12-06 11:55:40.207654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:42.561 [2024-12-06 11:55:40.207697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.561 BaseBdev2 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:42.561 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.562 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:42.562 [ 00:12:42.562 { 00:12:42.562 "name": "BaseBdev2", 00:12:42.562 "aliases": [ 00:12:42.562 "efaea633-9d52-4a22-8eae-3011826661e9" 00:12:42.562 ], 00:12:42.562 "product_name": "Malloc disk", 00:12:42.562 "block_size": 512, 00:12:42.562 "num_blocks": 65536, 00:12:42.562 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:42.562 "assigned_rate_limits": { 00:12:42.562 "rw_ios_per_sec": 0, 00:12:42.562 "rw_mbytes_per_sec": 0, 00:12:42.562 "r_mbytes_per_sec": 0, 00:12:42.562 "w_mbytes_per_sec": 0 00:12:42.562 }, 00:12:42.562 "claimed": false, 00:12:42.562 "zoned": false, 00:12:42.562 "supported_io_types": { 00:12:42.562 "read": true, 00:12:42.562 "write": true, 00:12:42.562 "unmap": true, 00:12:42.562 "flush": true, 00:12:42.562 "reset": true, 00:12:42.562 "nvme_admin": false, 00:12:42.562 "nvme_io": false, 00:12:42.562 "nvme_io_md": false, 00:12:42.562 "write_zeroes": true, 00:12:42.822 "zcopy": true, 00:12:42.822 "get_zone_info": false, 00:12:42.822 "zone_management": false, 00:12:42.822 "zone_append": false, 00:12:42.822 "compare": false, 00:12:42.822 "compare_and_write": false, 00:12:42.822 "abort": true, 00:12:42.822 "seek_hole": false, 00:12:42.822 "seek_data": false, 00:12:42.822 "copy": true, 00:12:42.822 "nvme_iov_md": false 00:12:42.822 }, 00:12:42.822 "memory_domains": [ 00:12:42.822 { 00:12:42.822 "dma_device_id": "system", 00:12:42.822 "dma_device_type": 1 00:12:42.822 }, 00:12:42.822 { 00:12:42.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.822 "dma_device_type": 2 00:12:42.822 } 00:12:42.822 ], 00:12:42.822 "driver_specific": {} 00:12:42.822 } 00:12:42.822 ] 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.822 BaseBdev3 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:42.822 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.823 [ 00:12:42.823 { 00:12:42.823 "name": "BaseBdev3", 00:12:42.823 "aliases": [ 00:12:42.823 "05c679eb-5cbf-417d-8301-c739b3283afc" 00:12:42.823 ], 00:12:42.823 "product_name": "Malloc disk", 00:12:42.823 "block_size": 512, 00:12:42.823 "num_blocks": 65536, 00:12:42.823 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:42.823 "assigned_rate_limits": { 00:12:42.823 "rw_ios_per_sec": 0, 00:12:42.823 "rw_mbytes_per_sec": 0, 00:12:42.823 "r_mbytes_per_sec": 0, 00:12:42.823 "w_mbytes_per_sec": 0 00:12:42.823 }, 00:12:42.823 "claimed": false, 00:12:42.823 "zoned": false, 00:12:42.823 "supported_io_types": { 00:12:42.823 "read": true, 00:12:42.823 "write": true, 00:12:42.823 "unmap": true, 00:12:42.823 "flush": true, 00:12:42.823 "reset": true, 00:12:42.823 "nvme_admin": false, 00:12:42.823 "nvme_io": false, 00:12:42.823 "nvme_io_md": false, 00:12:42.823 "write_zeroes": true, 00:12:42.823 "zcopy": true, 00:12:42.823 "get_zone_info": false, 00:12:42.823 "zone_management": false, 00:12:42.823 "zone_append": false, 00:12:42.823 "compare": false, 00:12:42.823 "compare_and_write": false, 00:12:42.823 "abort": true, 00:12:42.823 "seek_hole": false, 00:12:42.823 "seek_data": false, 00:12:42.823 "copy": true, 00:12:42.823 "nvme_iov_md": false 00:12:42.823 }, 00:12:42.823 "memory_domains": [ 00:12:42.823 { 00:12:42.823 "dma_device_id": "system", 00:12:42.823 "dma_device_type": 1 00:12:42.823 }, 00:12:42.823 { 00:12:42.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.823 "dma_device_type": 2 00:12:42.823 } 00:12:42.823 ], 00:12:42.823 "driver_specific": {} 00:12:42.823 } 00:12:42.823 ] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:42.823 11:55:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.823 [2024-12-06 11:55:40.382163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.823 [2024-12-06 11:55:40.382292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.823 [2024-12-06 11:55:40.382331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.823 [2024-12-06 11:55:40.384105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.823 11:55:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.823 "name": "Existed_Raid", 00:12:42.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.823 "strip_size_kb": 64, 00:12:42.823 "state": "configuring", 00:12:42.823 "raid_level": "raid5f", 00:12:42.823 "superblock": false, 00:12:42.823 "num_base_bdevs": 3, 00:12:42.823 "num_base_bdevs_discovered": 2, 00:12:42.823 "num_base_bdevs_operational": 3, 00:12:42.823 "base_bdevs_list": [ 00:12:42.823 { 00:12:42.823 "name": "BaseBdev1", 00:12:42.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.823 "is_configured": false, 00:12:42.823 "data_offset": 0, 00:12:42.823 "data_size": 0 00:12:42.823 }, 00:12:42.823 { 00:12:42.823 "name": "BaseBdev2", 00:12:42.823 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:42.823 "is_configured": true, 00:12:42.823 "data_offset": 0, 00:12:42.823 "data_size": 65536 00:12:42.823 }, 00:12:42.823 { 00:12:42.823 "name": "BaseBdev3", 00:12:42.823 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:42.823 "is_configured": true, 
00:12:42.823 "data_offset": 0, 00:12:42.823 "data_size": 65536 00:12:42.823 } 00:12:42.823 ] 00:12:42.823 }' 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.823 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 [2024-12-06 11:55:40.857363] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.394 11:55:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.394 "name": "Existed_Raid", 00:12:43.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.394 "strip_size_kb": 64, 00:12:43.394 "state": "configuring", 00:12:43.394 "raid_level": "raid5f", 00:12:43.394 "superblock": false, 00:12:43.394 "num_base_bdevs": 3, 00:12:43.394 "num_base_bdevs_discovered": 1, 00:12:43.394 "num_base_bdevs_operational": 3, 00:12:43.394 "base_bdevs_list": [ 00:12:43.394 { 00:12:43.394 "name": "BaseBdev1", 00:12:43.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.394 "is_configured": false, 00:12:43.394 "data_offset": 0, 00:12:43.394 "data_size": 0 00:12:43.394 }, 00:12:43.394 { 00:12:43.394 "name": null, 00:12:43.394 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:43.394 "is_configured": false, 00:12:43.394 "data_offset": 0, 00:12:43.394 "data_size": 65536 00:12:43.394 }, 00:12:43.394 { 00:12:43.394 "name": "BaseBdev3", 00:12:43.394 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:43.394 "is_configured": true, 00:12:43.394 "data_offset": 0, 00:12:43.394 "data_size": 65536 00:12:43.394 } 00:12:43.394 ] 00:12:43.394 }' 00:12:43.394 11:55:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.394 11:55:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.654 [2024-12-06 11:55:41.327593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.654 BaseBdev1 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:43.654 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.654 11:55:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.655 [ 00:12:43.655 { 00:12:43.655 "name": "BaseBdev1", 00:12:43.655 "aliases": [ 00:12:43.655 "952d7c36-52be-4398-9a5f-2ff4e103485b" 00:12:43.655 ], 00:12:43.655 "product_name": "Malloc disk", 00:12:43.655 "block_size": 512, 00:12:43.655 "num_blocks": 65536, 00:12:43.655 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:43.655 "assigned_rate_limits": { 00:12:43.655 "rw_ios_per_sec": 0, 00:12:43.655 "rw_mbytes_per_sec": 0, 00:12:43.655 "r_mbytes_per_sec": 0, 00:12:43.655 "w_mbytes_per_sec": 0 00:12:43.655 }, 00:12:43.655 "claimed": true, 00:12:43.655 "claim_type": "exclusive_write", 00:12:43.655 "zoned": false, 00:12:43.655 "supported_io_types": { 00:12:43.655 "read": true, 00:12:43.655 "write": true, 00:12:43.655 "unmap": true, 00:12:43.655 "flush": true, 00:12:43.655 "reset": true, 00:12:43.655 "nvme_admin": false, 00:12:43.655 "nvme_io": false, 00:12:43.655 "nvme_io_md": false, 00:12:43.655 "write_zeroes": true, 00:12:43.655 "zcopy": true, 00:12:43.655 "get_zone_info": false, 00:12:43.655 "zone_management": false, 00:12:43.655 "zone_append": false, 00:12:43.655 
"compare": false, 00:12:43.655 "compare_and_write": false, 00:12:43.655 "abort": true, 00:12:43.655 "seek_hole": false, 00:12:43.655 "seek_data": false, 00:12:43.655 "copy": true, 00:12:43.655 "nvme_iov_md": false 00:12:43.655 }, 00:12:43.655 "memory_domains": [ 00:12:43.655 { 00:12:43.655 "dma_device_id": "system", 00:12:43.655 "dma_device_type": 1 00:12:43.655 }, 00:12:43.655 { 00:12:43.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.655 "dma_device_type": 2 00:12:43.655 } 00:12:43.655 ], 00:12:43.655 "driver_specific": {} 00:12:43.655 } 00:12:43.655 ] 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.655 11:55:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.655 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.916 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.916 "name": "Existed_Raid", 00:12:43.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.916 "strip_size_kb": 64, 00:12:43.916 "state": "configuring", 00:12:43.916 "raid_level": "raid5f", 00:12:43.916 "superblock": false, 00:12:43.916 "num_base_bdevs": 3, 00:12:43.916 "num_base_bdevs_discovered": 2, 00:12:43.916 "num_base_bdevs_operational": 3, 00:12:43.916 "base_bdevs_list": [ 00:12:43.916 { 00:12:43.916 "name": "BaseBdev1", 00:12:43.916 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:43.916 "is_configured": true, 00:12:43.916 "data_offset": 0, 00:12:43.916 "data_size": 65536 00:12:43.916 }, 00:12:43.916 { 00:12:43.916 "name": null, 00:12:43.916 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:43.916 "is_configured": false, 00:12:43.916 "data_offset": 0, 00:12:43.916 "data_size": 65536 00:12:43.916 }, 00:12:43.916 { 00:12:43.916 "name": "BaseBdev3", 00:12:43.916 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:43.916 "is_configured": true, 00:12:43.916 "data_offset": 0, 00:12:43.916 "data_size": 65536 00:12:43.916 } 00:12:43.916 ] 00:12:43.916 }' 00:12:43.916 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.916 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.176 11:55:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.176 [2024-12-06 11:55:41.835042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.176 11:55:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.176 "name": "Existed_Raid", 00:12:44.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.176 "strip_size_kb": 64, 00:12:44.176 "state": "configuring", 00:12:44.176 "raid_level": "raid5f", 00:12:44.176 "superblock": false, 00:12:44.176 "num_base_bdevs": 3, 00:12:44.176 "num_base_bdevs_discovered": 1, 00:12:44.176 "num_base_bdevs_operational": 3, 00:12:44.176 "base_bdevs_list": [ 00:12:44.176 { 00:12:44.176 "name": "BaseBdev1", 00:12:44.176 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:44.176 "is_configured": true, 00:12:44.176 "data_offset": 0, 00:12:44.176 "data_size": 65536 00:12:44.176 }, 00:12:44.176 { 00:12:44.176 "name": null, 00:12:44.176 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:44.176 "is_configured": false, 00:12:44.176 "data_offset": 0, 00:12:44.176 "data_size": 65536 00:12:44.176 }, 00:12:44.176 { 00:12:44.176 "name": null, 
00:12:44.176 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:44.176 "is_configured": false, 00:12:44.176 "data_offset": 0, 00:12:44.176 "data_size": 65536 00:12:44.176 } 00:12:44.176 ] 00:12:44.176 }' 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.176 11:55:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.746 [2024-12-06 11:55:42.310223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.746 11:55:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.746 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.746 "name": "Existed_Raid", 00:12:44.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.746 "strip_size_kb": 64, 00:12:44.746 "state": "configuring", 00:12:44.746 "raid_level": "raid5f", 00:12:44.746 "superblock": false, 00:12:44.746 "num_base_bdevs": 3, 00:12:44.746 "num_base_bdevs_discovered": 2, 00:12:44.746 "num_base_bdevs_operational": 3, 00:12:44.746 "base_bdevs_list": [ 00:12:44.746 { 
00:12:44.746 "name": "BaseBdev1", 00:12:44.746 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:44.746 "is_configured": true, 00:12:44.746 "data_offset": 0, 00:12:44.746 "data_size": 65536 00:12:44.746 }, 00:12:44.746 { 00:12:44.746 "name": null, 00:12:44.746 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:44.746 "is_configured": false, 00:12:44.746 "data_offset": 0, 00:12:44.746 "data_size": 65536 00:12:44.746 }, 00:12:44.746 { 00:12:44.746 "name": "BaseBdev3", 00:12:44.746 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:44.747 "is_configured": true, 00:12:44.747 "data_offset": 0, 00:12:44.747 "data_size": 65536 00:12:44.747 } 00:12:44.747 ] 00:12:44.747 }' 00:12:44.747 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.747 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.007 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.007 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.007 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.007 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.007 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 [2024-12-06 11:55:42.797403] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.267 "name": "Existed_Raid", 00:12:45.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.267 "strip_size_kb": 64, 00:12:45.267 "state": "configuring", 00:12:45.267 "raid_level": "raid5f", 00:12:45.267 "superblock": false, 00:12:45.267 "num_base_bdevs": 3, 00:12:45.267 "num_base_bdevs_discovered": 1, 00:12:45.267 "num_base_bdevs_operational": 3, 00:12:45.267 "base_bdevs_list": [ 00:12:45.267 { 00:12:45.267 "name": null, 00:12:45.267 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:45.267 "is_configured": false, 00:12:45.267 "data_offset": 0, 00:12:45.267 "data_size": 65536 00:12:45.267 }, 00:12:45.267 { 00:12:45.267 "name": null, 00:12:45.267 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:45.267 "is_configured": false, 00:12:45.267 "data_offset": 0, 00:12:45.267 "data_size": 65536 00:12:45.267 }, 00:12:45.267 { 00:12:45.267 "name": "BaseBdev3", 00:12:45.267 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:45.267 "is_configured": true, 00:12:45.267 "data_offset": 0, 00:12:45.267 "data_size": 65536 00:12:45.267 } 00:12:45.267 ] 00:12:45.267 }' 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.267 11:55:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 [2024-12-06 11:55:43.271183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.527 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.786 11:55:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.786 "name": "Existed_Raid", 00:12:45.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.786 "strip_size_kb": 64, 00:12:45.786 "state": "configuring", 00:12:45.786 "raid_level": "raid5f", 00:12:45.786 "superblock": false, 00:12:45.786 "num_base_bdevs": 3, 00:12:45.786 "num_base_bdevs_discovered": 2, 00:12:45.786 "num_base_bdevs_operational": 3, 00:12:45.786 "base_bdevs_list": [ 00:12:45.786 { 00:12:45.786 "name": null, 00:12:45.786 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:45.786 "is_configured": false, 00:12:45.786 "data_offset": 0, 00:12:45.786 "data_size": 65536 00:12:45.786 }, 00:12:45.786 { 00:12:45.786 "name": "BaseBdev2", 00:12:45.786 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:45.786 "is_configured": true, 00:12:45.786 "data_offset": 0, 00:12:45.786 "data_size": 65536 00:12:45.786 }, 00:12:45.786 { 00:12:45.786 "name": "BaseBdev3", 00:12:45.786 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:45.786 "is_configured": true, 00:12:45.786 "data_offset": 0, 00:12:45.786 "data_size": 65536 00:12:45.786 } 00:12:45.786 ] 00:12:45.786 }' 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.786 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.046 11:55:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 952d7c36-52be-4398-9a5f-2ff4e103485b 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.306 [2024-12-06 11:55:43.805064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:46.306 [2024-12-06 11:55:43.805181] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:46.306 [2024-12-06 11:55:43.805207] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:46.306 [2024-12-06 11:55:43.805485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:12:46.306 [2024-12-06 11:55:43.805917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:46.306 [2024-12-06 11:55:43.805967] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:46.306 [2024-12-06 11:55:43.806176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.306 NewBaseBdev 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.306 11:55:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.306 [ 00:12:46.306 { 00:12:46.306 "name": "NewBaseBdev", 00:12:46.306 "aliases": [ 00:12:46.306 "952d7c36-52be-4398-9a5f-2ff4e103485b" 00:12:46.306 ], 00:12:46.306 "product_name": "Malloc disk", 00:12:46.306 "block_size": 512, 00:12:46.306 "num_blocks": 65536, 00:12:46.306 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:46.306 "assigned_rate_limits": { 00:12:46.306 "rw_ios_per_sec": 0, 00:12:46.306 "rw_mbytes_per_sec": 0, 00:12:46.306 "r_mbytes_per_sec": 0, 00:12:46.306 "w_mbytes_per_sec": 0 00:12:46.306 }, 00:12:46.306 "claimed": true, 00:12:46.306 "claim_type": "exclusive_write", 00:12:46.306 "zoned": false, 00:12:46.306 "supported_io_types": { 00:12:46.306 "read": true, 00:12:46.306 "write": true, 00:12:46.306 "unmap": true, 00:12:46.306 "flush": true, 00:12:46.306 "reset": true, 00:12:46.306 "nvme_admin": false, 00:12:46.306 "nvme_io": false, 00:12:46.306 "nvme_io_md": false, 00:12:46.306 "write_zeroes": true, 00:12:46.306 "zcopy": true, 00:12:46.306 "get_zone_info": false, 00:12:46.306 "zone_management": false, 00:12:46.306 "zone_append": false, 00:12:46.306 "compare": false, 00:12:46.306 "compare_and_write": false, 00:12:46.306 "abort": true, 00:12:46.306 "seek_hole": false, 00:12:46.306 "seek_data": false, 00:12:46.306 "copy": true, 00:12:46.306 "nvme_iov_md": false 00:12:46.306 }, 00:12:46.306 "memory_domains": [ 00:12:46.306 { 00:12:46.306 "dma_device_id": "system", 00:12:46.306 "dma_device_type": 1 00:12:46.306 }, 00:12:46.306 { 00:12:46.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.306 "dma_device_type": 2 00:12:46.306 } 00:12:46.306 ], 00:12:46.306 "driver_specific": {} 00:12:46.306 } 00:12:46.306 ] 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:46.306 11:55:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.306 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.307 "name": "Existed_Raid", 00:12:46.307 "uuid": "db24d37a-1aa4-49b6-99a6-8d8b74753d60", 00:12:46.307 "strip_size_kb": 64, 00:12:46.307 "state": "online", 
00:12:46.307 "raid_level": "raid5f", 00:12:46.307 "superblock": false, 00:12:46.307 "num_base_bdevs": 3, 00:12:46.307 "num_base_bdevs_discovered": 3, 00:12:46.307 "num_base_bdevs_operational": 3, 00:12:46.307 "base_bdevs_list": [ 00:12:46.307 { 00:12:46.307 "name": "NewBaseBdev", 00:12:46.307 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:46.307 "is_configured": true, 00:12:46.307 "data_offset": 0, 00:12:46.307 "data_size": 65536 00:12:46.307 }, 00:12:46.307 { 00:12:46.307 "name": "BaseBdev2", 00:12:46.307 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:46.307 "is_configured": true, 00:12:46.307 "data_offset": 0, 00:12:46.307 "data_size": 65536 00:12:46.307 }, 00:12:46.307 { 00:12:46.307 "name": "BaseBdev3", 00:12:46.307 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:46.307 "is_configured": true, 00:12:46.307 "data_offset": 0, 00:12:46.307 "data_size": 65536 00:12:46.307 } 00:12:46.307 ] 00:12:46.307 }' 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.307 11:55:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:46.566 11:55:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.566 [2024-12-06 11:55:44.296527] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.566 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.827 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.827 "name": "Existed_Raid", 00:12:46.827 "aliases": [ 00:12:46.827 "db24d37a-1aa4-49b6-99a6-8d8b74753d60" 00:12:46.827 ], 00:12:46.827 "product_name": "Raid Volume", 00:12:46.827 "block_size": 512, 00:12:46.827 "num_blocks": 131072, 00:12:46.827 "uuid": "db24d37a-1aa4-49b6-99a6-8d8b74753d60", 00:12:46.827 "assigned_rate_limits": { 00:12:46.827 "rw_ios_per_sec": 0, 00:12:46.827 "rw_mbytes_per_sec": 0, 00:12:46.827 "r_mbytes_per_sec": 0, 00:12:46.827 "w_mbytes_per_sec": 0 00:12:46.827 }, 00:12:46.827 "claimed": false, 00:12:46.827 "zoned": false, 00:12:46.827 "supported_io_types": { 00:12:46.827 "read": true, 00:12:46.827 "write": true, 00:12:46.827 "unmap": false, 00:12:46.827 "flush": false, 00:12:46.827 "reset": true, 00:12:46.827 "nvme_admin": false, 00:12:46.827 "nvme_io": false, 00:12:46.827 "nvme_io_md": false, 00:12:46.827 "write_zeroes": true, 00:12:46.827 "zcopy": false, 00:12:46.827 "get_zone_info": false, 00:12:46.827 "zone_management": false, 00:12:46.827 "zone_append": false, 00:12:46.827 "compare": false, 00:12:46.827 "compare_and_write": false, 00:12:46.827 "abort": false, 00:12:46.827 "seek_hole": false, 00:12:46.827 "seek_data": false, 00:12:46.828 "copy": false, 00:12:46.828 "nvme_iov_md": false 00:12:46.828 }, 00:12:46.828 "driver_specific": { 00:12:46.828 "raid": { 00:12:46.828 "uuid": 
"db24d37a-1aa4-49b6-99a6-8d8b74753d60", 00:12:46.828 "strip_size_kb": 64, 00:12:46.828 "state": "online", 00:12:46.828 "raid_level": "raid5f", 00:12:46.828 "superblock": false, 00:12:46.828 "num_base_bdevs": 3, 00:12:46.828 "num_base_bdevs_discovered": 3, 00:12:46.828 "num_base_bdevs_operational": 3, 00:12:46.828 "base_bdevs_list": [ 00:12:46.828 { 00:12:46.828 "name": "NewBaseBdev", 00:12:46.828 "uuid": "952d7c36-52be-4398-9a5f-2ff4e103485b", 00:12:46.828 "is_configured": true, 00:12:46.828 "data_offset": 0, 00:12:46.828 "data_size": 65536 00:12:46.828 }, 00:12:46.828 { 00:12:46.828 "name": "BaseBdev2", 00:12:46.828 "uuid": "efaea633-9d52-4a22-8eae-3011826661e9", 00:12:46.828 "is_configured": true, 00:12:46.828 "data_offset": 0, 00:12:46.828 "data_size": 65536 00:12:46.828 }, 00:12:46.828 { 00:12:46.828 "name": "BaseBdev3", 00:12:46.828 "uuid": "05c679eb-5cbf-417d-8301-c739b3283afc", 00:12:46.828 "is_configured": true, 00:12:46.828 "data_offset": 0, 00:12:46.828 "data_size": 65536 00:12:46.828 } 00:12:46.828 ] 00:12:46.828 } 00:12:46.828 } 00:12:46.828 }' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:46.828 BaseBdev2 00:12:46.828 BaseBdev3' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.828 11:55:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.828 [2024-12-06 11:55:44.555852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.828 [2024-12-06 11:55:44.555924] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.828 [2024-12-06 11:55:44.556005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.828 [2024-12-06 11:55:44.556242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.828 [2024-12-06 11:55:44.556275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90048 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90048 ']' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90048 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.828 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90048 00:12:47.087 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.087 killing process with pid 90048 00:12:47.087 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.087 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90048' 00:12:47.087 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90048 00:12:47.087 [2024-12-06 11:55:44.594661] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.087 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90048 00:12:47.087 [2024-12-06 11:55:44.626284] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:47.347 00:12:47.347 real 0m8.884s 00:12:47.347 user 0m15.116s 00:12:47.347 sys 0m1.885s 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.347 ************************************ 00:12:47.347 END TEST raid5f_state_function_test 00:12:47.347 ************************************ 00:12:47.347 11:55:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:12:47.347 11:55:44 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:47.347 11:55:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.347 11:55:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.347 ************************************ 00:12:47.347 START TEST raid5f_state_function_test_sb 00:12:47.347 ************************************ 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:47.347 11:55:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:47.347 Process raid pid: 90653 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90653 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90653' 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90653 00:12:47.347 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90653 ']' 00:12:47.348 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.348 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.348 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.348 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.348 11:55:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.348 [2024-12-06 11:55:45.056313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:47.348 [2024-12-06 11:55:45.056521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.606 [2024-12-06 11:55:45.204095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.606 [2024-12-06 11:55:45.250685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.606 [2024-12-06 11:55:45.293973] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.606 [2024-12-06 11:55:45.294007] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.175 [2024-12-06 11:55:45.867989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.175 [2024-12-06 11:55:45.868038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.175 [2024-12-06 11:55:45.868049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.175 [2024-12-06 11:55:45.868058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.175 [2024-12-06 11:55:45.868063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:48.175 [2024-12-06 11:55:45.868076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.175 11:55:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.175 "name": "Existed_Raid", 00:12:48.175 "uuid": "75f3c633-ea7b-4d8e-8cac-efde5a60235a", 00:12:48.175 "strip_size_kb": 64, 00:12:48.175 "state": "configuring", 00:12:48.175 "raid_level": "raid5f", 00:12:48.175 "superblock": true, 00:12:48.175 "num_base_bdevs": 3, 00:12:48.175 "num_base_bdevs_discovered": 0, 00:12:48.175 "num_base_bdevs_operational": 3, 00:12:48.175 "base_bdevs_list": [ 00:12:48.175 { 00:12:48.175 "name": "BaseBdev1", 00:12:48.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.175 "is_configured": false, 00:12:48.175 "data_offset": 0, 00:12:48.175 "data_size": 0 00:12:48.175 }, 00:12:48.175 { 00:12:48.175 "name": "BaseBdev2", 00:12:48.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.175 "is_configured": false, 00:12:48.175 "data_offset": 0, 00:12:48.175 "data_size": 0 00:12:48.175 }, 00:12:48.175 { 00:12:48.175 "name": "BaseBdev3", 00:12:48.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.175 "is_configured": false, 00:12:48.175 "data_offset": 0, 00:12:48.175 "data_size": 0 00:12:48.175 } 00:12:48.175 ] 00:12:48.175 }' 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.175 11:55:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 [2024-12-06 11:55:46.319090] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.743 
[2024-12-06 11:55:46.319171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 [2024-12-06 11:55:46.331101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.743 [2024-12-06 11:55:46.331181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.743 [2024-12-06 11:55:46.331207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.743 [2024-12-06 11:55:46.331228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.743 [2024-12-06 11:55:46.331254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.743 [2024-12-06 11:55:46.331273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 [2024-12-06 11:55:46.352155] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.743 BaseBdev1 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 [ 00:12:48.743 { 00:12:48.743 "name": "BaseBdev1", 00:12:48.743 "aliases": [ 00:12:48.743 "41c96f58-df8b-4b12-819c-d6558b4fcf58" 00:12:48.743 ], 00:12:48.743 "product_name": "Malloc disk", 00:12:48.743 "block_size": 512, 00:12:48.743 
"num_blocks": 65536, 00:12:48.743 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:48.743 "assigned_rate_limits": { 00:12:48.743 "rw_ios_per_sec": 0, 00:12:48.743 "rw_mbytes_per_sec": 0, 00:12:48.743 "r_mbytes_per_sec": 0, 00:12:48.743 "w_mbytes_per_sec": 0 00:12:48.743 }, 00:12:48.743 "claimed": true, 00:12:48.743 "claim_type": "exclusive_write", 00:12:48.743 "zoned": false, 00:12:48.743 "supported_io_types": { 00:12:48.743 "read": true, 00:12:48.743 "write": true, 00:12:48.743 "unmap": true, 00:12:48.743 "flush": true, 00:12:48.743 "reset": true, 00:12:48.743 "nvme_admin": false, 00:12:48.743 "nvme_io": false, 00:12:48.743 "nvme_io_md": false, 00:12:48.743 "write_zeroes": true, 00:12:48.743 "zcopy": true, 00:12:48.743 "get_zone_info": false, 00:12:48.743 "zone_management": false, 00:12:48.743 "zone_append": false, 00:12:48.743 "compare": false, 00:12:48.743 "compare_and_write": false, 00:12:48.743 "abort": true, 00:12:48.743 "seek_hole": false, 00:12:48.743 "seek_data": false, 00:12:48.743 "copy": true, 00:12:48.743 "nvme_iov_md": false 00:12:48.743 }, 00:12:48.743 "memory_domains": [ 00:12:48.743 { 00:12:48.743 "dma_device_id": "system", 00:12:48.743 "dma_device_type": 1 00:12:48.743 }, 00:12:48.743 { 00:12:48.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.743 "dma_device_type": 2 00:12:48.743 } 00:12:48.743 ], 00:12:48.743 "driver_specific": {} 00:12:48.743 } 00:12:48.743 ] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.743 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.743 "name": "Existed_Raid", 00:12:48.743 "uuid": "f5866eb8-fac2-4c3b-88ea-2a6db71c5fef", 00:12:48.743 "strip_size_kb": 64, 00:12:48.743 "state": "configuring", 00:12:48.743 "raid_level": "raid5f", 00:12:48.743 "superblock": true, 00:12:48.743 "num_base_bdevs": 3, 00:12:48.743 "num_base_bdevs_discovered": 1, 00:12:48.743 "num_base_bdevs_operational": 3, 00:12:48.743 "base_bdevs_list": [ 00:12:48.743 { 00:12:48.743 
"name": "BaseBdev1", 00:12:48.744 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:48.744 "is_configured": true, 00:12:48.744 "data_offset": 2048, 00:12:48.744 "data_size": 63488 00:12:48.744 }, 00:12:48.744 { 00:12:48.744 "name": "BaseBdev2", 00:12:48.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.744 "is_configured": false, 00:12:48.744 "data_offset": 0, 00:12:48.744 "data_size": 0 00:12:48.744 }, 00:12:48.744 { 00:12:48.744 "name": "BaseBdev3", 00:12:48.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.744 "is_configured": false, 00:12:48.744 "data_offset": 0, 00:12:48.744 "data_size": 0 00:12:48.744 } 00:12:48.744 ] 00:12:48.744 }' 00:12:48.744 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.744 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.312 [2024-12-06 11:55:46.815409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.312 [2024-12-06 11:55:46.815501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:49.312 [2024-12-06 11:55:46.827488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.312 [2024-12-06 11:55:46.829216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.312 [2024-12-06 11:55:46.829271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.312 [2024-12-06 11:55:46.829281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.312 [2024-12-06 11:55:46.829291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.312 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.313 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.313 "name": "Existed_Raid", 00:12:49.313 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:49.313 "strip_size_kb": 64, 00:12:49.313 "state": "configuring", 00:12:49.313 "raid_level": "raid5f", 00:12:49.313 "superblock": true, 00:12:49.313 "num_base_bdevs": 3, 00:12:49.313 "num_base_bdevs_discovered": 1, 00:12:49.313 "num_base_bdevs_operational": 3, 00:12:49.313 "base_bdevs_list": [ 00:12:49.313 { 00:12:49.313 "name": "BaseBdev1", 00:12:49.313 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:49.313 "is_configured": true, 00:12:49.313 "data_offset": 2048, 00:12:49.313 "data_size": 63488 00:12:49.313 }, 00:12:49.313 { 00:12:49.313 "name": "BaseBdev2", 00:12:49.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.313 "is_configured": false, 00:12:49.313 "data_offset": 0, 00:12:49.313 "data_size": 0 00:12:49.313 }, 00:12:49.313 { 00:12:49.313 "name": "BaseBdev3", 00:12:49.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.313 "is_configured": false, 00:12:49.313 "data_offset": 0, 00:12:49.313 "data_size": 
0 00:12:49.313 } 00:12:49.313 ] 00:12:49.313 }' 00:12:49.313 11:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.313 11:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 [2024-12-06 11:55:47.233862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.573 BaseBdev2 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 [ 00:12:49.573 { 00:12:49.573 "name": "BaseBdev2", 00:12:49.573 "aliases": [ 00:12:49.573 "a36c4753-70ca-431f-abc3-c0bf23c1278e" 00:12:49.573 ], 00:12:49.573 "product_name": "Malloc disk", 00:12:49.573 "block_size": 512, 00:12:49.573 "num_blocks": 65536, 00:12:49.573 "uuid": "a36c4753-70ca-431f-abc3-c0bf23c1278e", 00:12:49.573 "assigned_rate_limits": { 00:12:49.573 "rw_ios_per_sec": 0, 00:12:49.573 "rw_mbytes_per_sec": 0, 00:12:49.573 "r_mbytes_per_sec": 0, 00:12:49.573 "w_mbytes_per_sec": 0 00:12:49.573 }, 00:12:49.573 "claimed": true, 00:12:49.573 "claim_type": "exclusive_write", 00:12:49.573 "zoned": false, 00:12:49.573 "supported_io_types": { 00:12:49.573 "read": true, 00:12:49.573 "write": true, 00:12:49.573 "unmap": true, 00:12:49.573 "flush": true, 00:12:49.573 "reset": true, 00:12:49.573 "nvme_admin": false, 00:12:49.573 "nvme_io": false, 00:12:49.573 "nvme_io_md": false, 00:12:49.573 "write_zeroes": true, 00:12:49.573 "zcopy": true, 00:12:49.573 "get_zone_info": false, 00:12:49.573 "zone_management": false, 00:12:49.573 "zone_append": false, 00:12:49.573 "compare": false, 00:12:49.573 "compare_and_write": false, 00:12:49.573 "abort": true, 00:12:49.573 "seek_hole": false, 00:12:49.573 "seek_data": false, 00:12:49.573 "copy": true, 00:12:49.573 "nvme_iov_md": false 00:12:49.573 }, 00:12:49.573 "memory_domains": [ 00:12:49.573 { 00:12:49.573 "dma_device_id": "system", 00:12:49.573 "dma_device_type": 1 00:12:49.573 }, 00:12:49.573 { 00:12:49.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.573 "dma_device_type": 2 00:12:49.573 } 
00:12:49.573 ], 00:12:49.573 "driver_specific": {} 00:12:49.573 } 00:12:49.573 ] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.573 11:55:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.573 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.573 "name": "Existed_Raid", 00:12:49.573 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:49.573 "strip_size_kb": 64, 00:12:49.573 "state": "configuring", 00:12:49.573 "raid_level": "raid5f", 00:12:49.573 "superblock": true, 00:12:49.573 "num_base_bdevs": 3, 00:12:49.573 "num_base_bdevs_discovered": 2, 00:12:49.573 "num_base_bdevs_operational": 3, 00:12:49.574 "base_bdevs_list": [ 00:12:49.574 { 00:12:49.574 "name": "BaseBdev1", 00:12:49.574 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:49.574 "is_configured": true, 00:12:49.574 "data_offset": 2048, 00:12:49.574 "data_size": 63488 00:12:49.574 }, 00:12:49.574 { 00:12:49.574 "name": "BaseBdev2", 00:12:49.574 "uuid": "a36c4753-70ca-431f-abc3-c0bf23c1278e", 00:12:49.574 "is_configured": true, 00:12:49.574 "data_offset": 2048, 00:12:49.574 "data_size": 63488 00:12:49.574 }, 00:12:49.574 { 00:12:49.574 "name": "BaseBdev3", 00:12:49.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.574 "is_configured": false, 00:12:49.574 "data_offset": 0, 00:12:49.574 "data_size": 0 00:12:49.574 } 00:12:49.574 ] 00:12:49.574 }' 00:12:49.574 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.574 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.144 [2024-12-06 11:55:47.723916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.144 [2024-12-06 11:55:47.724114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:50.144 [2024-12-06 11:55:47.724133] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:50.144 BaseBdev3 00:12:50.144 [2024-12-06 11:55:47.724424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:50.144 [2024-12-06 11:55:47.724843] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:50.144 [2024-12-06 11:55:47.724856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.144 [2024-12-06 11:55:47.724978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.144 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.144 [ 00:12:50.144 { 00:12:50.144 "name": "BaseBdev3", 00:12:50.144 "aliases": [ 00:12:50.144 "d5c29b0f-8b1d-4d40-a770-e29c438ca803" 00:12:50.144 ], 00:12:50.144 "product_name": "Malloc disk", 00:12:50.144 "block_size": 512, 00:12:50.144 "num_blocks": 65536, 00:12:50.144 "uuid": "d5c29b0f-8b1d-4d40-a770-e29c438ca803", 00:12:50.144 "assigned_rate_limits": { 00:12:50.144 "rw_ios_per_sec": 0, 00:12:50.144 "rw_mbytes_per_sec": 0, 00:12:50.144 "r_mbytes_per_sec": 0, 00:12:50.144 "w_mbytes_per_sec": 0 00:12:50.144 }, 00:12:50.144 "claimed": true, 00:12:50.144 "claim_type": "exclusive_write", 00:12:50.144 "zoned": false, 00:12:50.144 "supported_io_types": { 00:12:50.144 "read": true, 00:12:50.145 "write": true, 00:12:50.145 "unmap": true, 00:12:50.145 "flush": true, 00:12:50.145 "reset": true, 00:12:50.145 "nvme_admin": false, 00:12:50.145 "nvme_io": false, 00:12:50.145 "nvme_io_md": false, 00:12:50.145 "write_zeroes": true, 00:12:50.145 "zcopy": true, 00:12:50.145 "get_zone_info": false, 00:12:50.145 "zone_management": false, 00:12:50.145 "zone_append": false, 00:12:50.145 "compare": false, 00:12:50.145 "compare_and_write": false, 00:12:50.145 "abort": true, 00:12:50.145 "seek_hole": false, 00:12:50.145 "seek_data": false, 00:12:50.145 "copy": true, 00:12:50.145 "nvme_iov_md": 
false 00:12:50.145 }, 00:12:50.145 "memory_domains": [ 00:12:50.145 { 00:12:50.145 "dma_device_id": "system", 00:12:50.145 "dma_device_type": 1 00:12:50.145 }, 00:12:50.145 { 00:12:50.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.145 "dma_device_type": 2 00:12:50.145 } 00:12:50.145 ], 00:12:50.145 "driver_specific": {} 00:12:50.145 } 00:12:50.145 ] 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.145 "name": "Existed_Raid", 00:12:50.145 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:50.145 "strip_size_kb": 64, 00:12:50.145 "state": "online", 00:12:50.145 "raid_level": "raid5f", 00:12:50.145 "superblock": true, 00:12:50.145 "num_base_bdevs": 3, 00:12:50.145 "num_base_bdevs_discovered": 3, 00:12:50.145 "num_base_bdevs_operational": 3, 00:12:50.145 "base_bdevs_list": [ 00:12:50.145 { 00:12:50.145 "name": "BaseBdev1", 00:12:50.145 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:50.145 "is_configured": true, 00:12:50.145 "data_offset": 2048, 00:12:50.145 "data_size": 63488 00:12:50.145 }, 00:12:50.145 { 00:12:50.145 "name": "BaseBdev2", 00:12:50.145 "uuid": "a36c4753-70ca-431f-abc3-c0bf23c1278e", 00:12:50.145 "is_configured": true, 00:12:50.145 "data_offset": 2048, 00:12:50.145 "data_size": 63488 00:12:50.145 }, 00:12:50.145 { 00:12:50.145 "name": "BaseBdev3", 00:12:50.145 "uuid": "d5c29b0f-8b1d-4d40-a770-e29c438ca803", 00:12:50.145 "is_configured": true, 00:12:50.145 "data_offset": 2048, 00:12:50.145 "data_size": 63488 00:12:50.145 } 00:12:50.145 ] 00:12:50.145 }' 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.145 11:55:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.715 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.716 [2024-12-06 11:55:48.223305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.716 "name": "Existed_Raid", 00:12:50.716 "aliases": [ 00:12:50.716 "02c4ef69-1492-4f92-8448-501636375510" 00:12:50.716 ], 00:12:50.716 "product_name": "Raid Volume", 00:12:50.716 "block_size": 512, 00:12:50.716 "num_blocks": 126976, 00:12:50.716 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:50.716 "assigned_rate_limits": { 00:12:50.716 "rw_ios_per_sec": 0, 00:12:50.716 "rw_mbytes_per_sec": 0, 00:12:50.716 "r_mbytes_per_sec": 
0, 00:12:50.716 "w_mbytes_per_sec": 0 00:12:50.716 }, 00:12:50.716 "claimed": false, 00:12:50.716 "zoned": false, 00:12:50.716 "supported_io_types": { 00:12:50.716 "read": true, 00:12:50.716 "write": true, 00:12:50.716 "unmap": false, 00:12:50.716 "flush": false, 00:12:50.716 "reset": true, 00:12:50.716 "nvme_admin": false, 00:12:50.716 "nvme_io": false, 00:12:50.716 "nvme_io_md": false, 00:12:50.716 "write_zeroes": true, 00:12:50.716 "zcopy": false, 00:12:50.716 "get_zone_info": false, 00:12:50.716 "zone_management": false, 00:12:50.716 "zone_append": false, 00:12:50.716 "compare": false, 00:12:50.716 "compare_and_write": false, 00:12:50.716 "abort": false, 00:12:50.716 "seek_hole": false, 00:12:50.716 "seek_data": false, 00:12:50.716 "copy": false, 00:12:50.716 "nvme_iov_md": false 00:12:50.716 }, 00:12:50.716 "driver_specific": { 00:12:50.716 "raid": { 00:12:50.716 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:50.716 "strip_size_kb": 64, 00:12:50.716 "state": "online", 00:12:50.716 "raid_level": "raid5f", 00:12:50.716 "superblock": true, 00:12:50.716 "num_base_bdevs": 3, 00:12:50.716 "num_base_bdevs_discovered": 3, 00:12:50.716 "num_base_bdevs_operational": 3, 00:12:50.716 "base_bdevs_list": [ 00:12:50.716 { 00:12:50.716 "name": "BaseBdev1", 00:12:50.716 "uuid": "41c96f58-df8b-4b12-819c-d6558b4fcf58", 00:12:50.716 "is_configured": true, 00:12:50.716 "data_offset": 2048, 00:12:50.716 "data_size": 63488 00:12:50.716 }, 00:12:50.716 { 00:12:50.716 "name": "BaseBdev2", 00:12:50.716 "uuid": "a36c4753-70ca-431f-abc3-c0bf23c1278e", 00:12:50.716 "is_configured": true, 00:12:50.716 "data_offset": 2048, 00:12:50.716 "data_size": 63488 00:12:50.716 }, 00:12:50.716 { 00:12:50.716 "name": "BaseBdev3", 00:12:50.716 "uuid": "d5c29b0f-8b1d-4d40-a770-e29c438ca803", 00:12:50.716 "is_configured": true, 00:12:50.716 "data_offset": 2048, 00:12:50.716 "data_size": 63488 00:12:50.716 } 00:12:50.716 ] 00:12:50.716 } 00:12:50.716 } 00:12:50.716 }' 00:12:50.716 11:55:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:50.716 BaseBdev2 00:12:50.716 BaseBdev3' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.716 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.716 [2024-12-06 11:55:48.462751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.976 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.977 "name": "Existed_Raid", 00:12:50.977 "uuid": "02c4ef69-1492-4f92-8448-501636375510", 00:12:50.977 "strip_size_kb": 64, 00:12:50.977 "state": "online", 00:12:50.977 "raid_level": "raid5f", 00:12:50.977 "superblock": true, 00:12:50.977 "num_base_bdevs": 3, 00:12:50.977 "num_base_bdevs_discovered": 2, 00:12:50.977 "num_base_bdevs_operational": 2, 00:12:50.977 "base_bdevs_list": [ 00:12:50.977 { 00:12:50.977 "name": null, 00:12:50.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.977 "is_configured": false, 00:12:50.977 "data_offset": 0, 00:12:50.977 "data_size": 63488 00:12:50.977 }, 00:12:50.977 { 00:12:50.977 "name": "BaseBdev2", 00:12:50.977 "uuid": "a36c4753-70ca-431f-abc3-c0bf23c1278e", 00:12:50.977 "is_configured": true, 00:12:50.977 "data_offset": 2048, 00:12:50.977 "data_size": 63488 00:12:50.977 }, 00:12:50.977 { 00:12:50.977 "name": "BaseBdev3", 00:12:50.977 "uuid": "d5c29b0f-8b1d-4d40-a770-e29c438ca803", 00:12:50.977 "is_configured": true, 00:12:50.977 "data_offset": 2048, 00:12:50.977 "data_size": 63488 00:12:50.977 } 00:12:50.977 ] 00:12:50.977 }' 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.977 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.237 [2024-12-06 11:55:48.961377] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.237 [2024-12-06 11:55:48.961564] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.237 [2024-12-06 11:55:48.972871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.237 11:55:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.237 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 11:55:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 [2024-12-06 11:55:49.020829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.498 [2024-12-06 11:55:49.020936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.498 
11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 BaseBdev2 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 [ 00:12:51.498 { 00:12:51.498 "name": "BaseBdev2", 00:12:51.498 "aliases": [ 00:12:51.498 "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507" 00:12:51.498 ], 00:12:51.498 "product_name": "Malloc disk", 00:12:51.498 "block_size": 512, 00:12:51.498 "num_blocks": 65536, 00:12:51.498 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:51.498 "assigned_rate_limits": { 00:12:51.498 "rw_ios_per_sec": 0, 00:12:51.498 "rw_mbytes_per_sec": 0, 00:12:51.498 "r_mbytes_per_sec": 0, 00:12:51.498 "w_mbytes_per_sec": 0 00:12:51.498 }, 00:12:51.498 "claimed": false, 00:12:51.498 "zoned": false, 00:12:51.498 "supported_io_types": { 00:12:51.498 "read": true, 00:12:51.498 "write": true, 00:12:51.498 "unmap": true, 00:12:51.498 "flush": true, 00:12:51.498 "reset": true, 00:12:51.498 "nvme_admin": false, 00:12:51.498 "nvme_io": false, 00:12:51.498 "nvme_io_md": false, 00:12:51.498 "write_zeroes": true, 00:12:51.498 "zcopy": true, 00:12:51.498 "get_zone_info": false, 00:12:51.498 "zone_management": false, 00:12:51.498 "zone_append": false, 00:12:51.498 "compare": false, 00:12:51.498 "compare_and_write": false, 
00:12:51.498 "abort": true, 00:12:51.498 "seek_hole": false, 00:12:51.498 "seek_data": false, 00:12:51.498 "copy": true, 00:12:51.498 "nvme_iov_md": false 00:12:51.498 }, 00:12:51.498 "memory_domains": [ 00:12:51.498 { 00:12:51.498 "dma_device_id": "system", 00:12:51.498 "dma_device_type": 1 00:12:51.498 }, 00:12:51.498 { 00:12:51.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.498 "dma_device_type": 2 00:12:51.498 } 00:12:51.498 ], 00:12:51.498 "driver_specific": {} 00:12:51.498 } 00:12:51.498 ] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 BaseBdev3 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.499 [ 00:12:51.499 { 00:12:51.499 "name": "BaseBdev3", 00:12:51.499 "aliases": [ 00:12:51.499 "99545a90-4079-498a-8f78-2fc4ad1df70b" 00:12:51.499 ], 00:12:51.499 "product_name": "Malloc disk", 00:12:51.499 "block_size": 512, 00:12:51.499 "num_blocks": 65536, 00:12:51.499 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:51.499 "assigned_rate_limits": { 00:12:51.499 "rw_ios_per_sec": 0, 00:12:51.499 "rw_mbytes_per_sec": 0, 00:12:51.499 "r_mbytes_per_sec": 0, 00:12:51.499 "w_mbytes_per_sec": 0 00:12:51.499 }, 00:12:51.499 "claimed": false, 00:12:51.499 "zoned": false, 00:12:51.499 "supported_io_types": { 00:12:51.499 "read": true, 00:12:51.499 "write": true, 00:12:51.499 "unmap": true, 00:12:51.499 "flush": true, 00:12:51.499 "reset": true, 00:12:51.499 "nvme_admin": false, 00:12:51.499 "nvme_io": false, 00:12:51.499 "nvme_io_md": false, 00:12:51.499 "write_zeroes": true, 00:12:51.499 "zcopy": true, 00:12:51.499 "get_zone_info": false, 00:12:51.499 "zone_management": false, 
00:12:51.499 "zone_append": false, 00:12:51.499 "compare": false, 00:12:51.499 "compare_and_write": false, 00:12:51.499 "abort": true, 00:12:51.499 "seek_hole": false, 00:12:51.499 "seek_data": false, 00:12:51.499 "copy": true, 00:12:51.499 "nvme_iov_md": false 00:12:51.499 }, 00:12:51.499 "memory_domains": [ 00:12:51.499 { 00:12:51.499 "dma_device_id": "system", 00:12:51.499 "dma_device_type": 1 00:12:51.499 }, 00:12:51.499 { 00:12:51.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.499 "dma_device_type": 2 00:12:51.499 } 00:12:51.499 ], 00:12:51.499 "driver_specific": {} 00:12:51.499 } 00:12:51.499 ] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.499 [2024-12-06 11:55:49.191111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.499 [2024-12-06 11:55:49.191244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.499 [2024-12-06 11:55:49.191286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.499 [2024-12-06 11:55:49.193071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.499 
11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:51.499 "name": "Existed_Raid", 00:12:51.499 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:51.499 "strip_size_kb": 64, 00:12:51.499 "state": "configuring", 00:12:51.499 "raid_level": "raid5f", 00:12:51.499 "superblock": true, 00:12:51.499 "num_base_bdevs": 3, 00:12:51.499 "num_base_bdevs_discovered": 2, 00:12:51.499 "num_base_bdevs_operational": 3, 00:12:51.499 "base_bdevs_list": [ 00:12:51.499 { 00:12:51.499 "name": "BaseBdev1", 00:12:51.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.499 "is_configured": false, 00:12:51.499 "data_offset": 0, 00:12:51.499 "data_size": 0 00:12:51.499 }, 00:12:51.499 { 00:12:51.499 "name": "BaseBdev2", 00:12:51.499 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:51.499 "is_configured": true, 00:12:51.499 "data_offset": 2048, 00:12:51.499 "data_size": 63488 00:12:51.499 }, 00:12:51.499 { 00:12:51.499 "name": "BaseBdev3", 00:12:51.499 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:51.499 "is_configured": true, 00:12:51.499 "data_offset": 2048, 00:12:51.499 "data_size": 63488 00:12:51.499 } 00:12:51.499 ] 00:12:51.499 }' 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.499 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.070 [2024-12-06 11:55:49.662349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.070 "name": "Existed_Raid", 00:12:52.070 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:52.070 "strip_size_kb": 64, 00:12:52.070 
"state": "configuring", 00:12:52.070 "raid_level": "raid5f", 00:12:52.070 "superblock": true, 00:12:52.070 "num_base_bdevs": 3, 00:12:52.070 "num_base_bdevs_discovered": 1, 00:12:52.070 "num_base_bdevs_operational": 3, 00:12:52.070 "base_bdevs_list": [ 00:12:52.070 { 00:12:52.070 "name": "BaseBdev1", 00:12:52.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.070 "is_configured": false, 00:12:52.070 "data_offset": 0, 00:12:52.070 "data_size": 0 00:12:52.070 }, 00:12:52.070 { 00:12:52.070 "name": null, 00:12:52.070 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:52.070 "is_configured": false, 00:12:52.070 "data_offset": 0, 00:12:52.070 "data_size": 63488 00:12:52.070 }, 00:12:52.070 { 00:12:52.070 "name": "BaseBdev3", 00:12:52.070 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:52.070 "is_configured": true, 00:12:52.070 "data_offset": 2048, 00:12:52.070 "data_size": 63488 00:12:52.070 } 00:12:52.070 ] 00:12:52.070 }' 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.070 11:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.331 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.331 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.331 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.331 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 [2024-12-06 11:55:50.140584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.591 BaseBdev1 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.591 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.592 [ 00:12:52.592 { 00:12:52.592 "name": "BaseBdev1", 00:12:52.592 "aliases": [ 00:12:52.592 "a624d62f-81f4-4f1f-9a3d-9687af3eeadb" 00:12:52.592 ], 00:12:52.592 "product_name": "Malloc disk", 00:12:52.592 "block_size": 512, 00:12:52.592 "num_blocks": 65536, 00:12:52.592 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:52.592 "assigned_rate_limits": { 00:12:52.592 "rw_ios_per_sec": 0, 00:12:52.592 "rw_mbytes_per_sec": 0, 00:12:52.592 "r_mbytes_per_sec": 0, 00:12:52.592 "w_mbytes_per_sec": 0 00:12:52.592 }, 00:12:52.592 "claimed": true, 00:12:52.592 "claim_type": "exclusive_write", 00:12:52.592 "zoned": false, 00:12:52.592 "supported_io_types": { 00:12:52.592 "read": true, 00:12:52.592 "write": true, 00:12:52.592 "unmap": true, 00:12:52.592 "flush": true, 00:12:52.592 "reset": true, 00:12:52.592 "nvme_admin": false, 00:12:52.592 "nvme_io": false, 00:12:52.592 "nvme_io_md": false, 00:12:52.592 "write_zeroes": true, 00:12:52.592 "zcopy": true, 00:12:52.592 "get_zone_info": false, 00:12:52.592 "zone_management": false, 00:12:52.592 "zone_append": false, 00:12:52.592 "compare": false, 00:12:52.592 "compare_and_write": false, 00:12:52.592 "abort": true, 00:12:52.592 "seek_hole": false, 00:12:52.592 "seek_data": false, 00:12:52.592 "copy": true, 00:12:52.592 "nvme_iov_md": false 00:12:52.592 }, 00:12:52.592 "memory_domains": [ 00:12:52.592 { 00:12:52.592 "dma_device_id": "system", 00:12:52.592 "dma_device_type": 1 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.592 "dma_device_type": 2 00:12:52.592 } 00:12:52.592 ], 00:12:52.592 "driver_specific": {} 00:12:52.592 } 00:12:52.592 ] 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.592 "name": "Existed_Raid", 00:12:52.592 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:52.592 "strip_size_kb": 64, 00:12:52.592 
"state": "configuring", 00:12:52.592 "raid_level": "raid5f", 00:12:52.592 "superblock": true, 00:12:52.592 "num_base_bdevs": 3, 00:12:52.592 "num_base_bdevs_discovered": 2, 00:12:52.592 "num_base_bdevs_operational": 3, 00:12:52.592 "base_bdevs_list": [ 00:12:52.592 { 00:12:52.592 "name": "BaseBdev1", 00:12:52.592 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:52.592 "is_configured": true, 00:12:52.592 "data_offset": 2048, 00:12:52.592 "data_size": 63488 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "name": null, 00:12:52.592 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:52.592 "is_configured": false, 00:12:52.592 "data_offset": 0, 00:12:52.592 "data_size": 63488 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "name": "BaseBdev3", 00:12:52.592 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:52.592 "is_configured": true, 00:12:52.592 "data_offset": 2048, 00:12:52.592 "data_size": 63488 00:12:52.592 } 00:12:52.592 ] 00:12:52.592 }' 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.592 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.162 [2024-12-06 11:55:50.663713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.162 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.163 11:55:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.163 "name": "Existed_Raid", 00:12:53.163 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:53.163 "strip_size_kb": 64, 00:12:53.163 "state": "configuring", 00:12:53.163 "raid_level": "raid5f", 00:12:53.163 "superblock": true, 00:12:53.163 "num_base_bdevs": 3, 00:12:53.163 "num_base_bdevs_discovered": 1, 00:12:53.163 "num_base_bdevs_operational": 3, 00:12:53.163 "base_bdevs_list": [ 00:12:53.163 { 00:12:53.163 "name": "BaseBdev1", 00:12:53.163 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:53.163 "is_configured": true, 00:12:53.163 "data_offset": 2048, 00:12:53.163 "data_size": 63488 00:12:53.163 }, 00:12:53.163 { 00:12:53.163 "name": null, 00:12:53.163 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:53.163 "is_configured": false, 00:12:53.163 "data_offset": 0, 00:12:53.163 "data_size": 63488 00:12:53.163 }, 00:12:53.163 { 00:12:53.163 "name": null, 00:12:53.163 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:53.163 "is_configured": false, 00:12:53.163 "data_offset": 0, 00:12:53.163 "data_size": 63488 00:12:53.163 } 00:12:53.163 ] 00:12:53.163 }' 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.163 11:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.423 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.423 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:12:53.423 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.424 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.424 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.683 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:53.683 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.683 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.683 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.684 [2024-12-06 11:55:51.186838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.684 "name": "Existed_Raid", 00:12:53.684 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:53.684 "strip_size_kb": 64, 00:12:53.684 "state": "configuring", 00:12:53.684 "raid_level": "raid5f", 00:12:53.684 "superblock": true, 00:12:53.684 "num_base_bdevs": 3, 00:12:53.684 "num_base_bdevs_discovered": 2, 00:12:53.684 "num_base_bdevs_operational": 3, 00:12:53.684 "base_bdevs_list": [ 00:12:53.684 { 00:12:53.684 "name": "BaseBdev1", 00:12:53.684 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:53.684 "is_configured": true, 00:12:53.684 "data_offset": 2048, 00:12:53.684 "data_size": 63488 00:12:53.684 }, 00:12:53.684 { 00:12:53.684 "name": null, 00:12:53.684 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:53.684 "is_configured": false, 00:12:53.684 "data_offset": 0, 00:12:53.684 "data_size": 63488 00:12:53.684 }, 00:12:53.684 { 00:12:53.684 "name": "BaseBdev3", 00:12:53.684 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:53.684 "is_configured": true, 00:12:53.684 "data_offset": 
2048, 00:12:53.684 "data_size": 63488 00:12:53.684 } 00:12:53.684 ] 00:12:53.684 }' 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.684 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.943 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.943 [2024-12-06 11:55:51.693992] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.212 11:55:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.212 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.212 "name": "Existed_Raid", 00:12:54.212 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:54.212 "strip_size_kb": 64, 00:12:54.212 "state": "configuring", 00:12:54.212 "raid_level": "raid5f", 00:12:54.212 "superblock": true, 00:12:54.212 "num_base_bdevs": 3, 00:12:54.212 "num_base_bdevs_discovered": 1, 00:12:54.212 "num_base_bdevs_operational": 3, 00:12:54.212 "base_bdevs_list": [ 00:12:54.212 { 00:12:54.212 "name": null, 00:12:54.212 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 
00:12:54.212 "is_configured": false, 00:12:54.212 "data_offset": 0, 00:12:54.212 "data_size": 63488 00:12:54.212 }, 00:12:54.212 { 00:12:54.213 "name": null, 00:12:54.213 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:54.213 "is_configured": false, 00:12:54.213 "data_offset": 0, 00:12:54.213 "data_size": 63488 00:12:54.213 }, 00:12:54.213 { 00:12:54.213 "name": "BaseBdev3", 00:12:54.213 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:54.213 "is_configured": true, 00:12:54.213 "data_offset": 2048, 00:12:54.213 "data_size": 63488 00:12:54.213 } 00:12:54.213 ] 00:12:54.213 }' 00:12:54.213 11:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.213 11:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.491 [2024-12-06 11:55:52.211825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.491 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.766 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.766 11:55:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.766 "name": "Existed_Raid", 00:12:54.766 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:54.766 "strip_size_kb": 64, 00:12:54.766 "state": "configuring", 00:12:54.766 "raid_level": "raid5f", 00:12:54.766 "superblock": true, 00:12:54.766 "num_base_bdevs": 3, 00:12:54.766 "num_base_bdevs_discovered": 2, 00:12:54.766 "num_base_bdevs_operational": 3, 00:12:54.766 "base_bdevs_list": [ 00:12:54.766 { 00:12:54.766 "name": null, 00:12:54.766 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:54.766 "is_configured": false, 00:12:54.766 "data_offset": 0, 00:12:54.766 "data_size": 63488 00:12:54.766 }, 00:12:54.766 { 00:12:54.766 "name": "BaseBdev2", 00:12:54.766 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:54.766 "is_configured": true, 00:12:54.766 "data_offset": 2048, 00:12:54.766 "data_size": 63488 00:12:54.766 }, 00:12:54.766 { 00:12:54.766 "name": "BaseBdev3", 00:12:54.766 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:54.766 "is_configured": true, 00:12:54.766 "data_offset": 2048, 00:12:54.766 "data_size": 63488 00:12:54.766 } 00:12:54.766 ] 00:12:54.766 }' 00:12:54.766 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.766 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.026 11:55:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a624d62f-81f4-4f1f-9a3d-9687af3eeadb 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.026 [2024-12-06 11:55:52.682158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.026 [2024-12-06 11:55:52.682326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:55.026 [2024-12-06 11:55:52.682342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:55.026 [2024-12-06 11:55:52.682585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:55.026 NewBaseBdev 00:12:55.026 [2024-12-06 11:55:52.682978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:55.026 [2024-12-06 11:55:52.682999] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:55.026 [2024-12-06 11:55:52.683103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.026 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.027 [ 00:12:55.027 { 00:12:55.027 "name": "NewBaseBdev", 00:12:55.027 "aliases": [ 00:12:55.027 "a624d62f-81f4-4f1f-9a3d-9687af3eeadb" 00:12:55.027 ], 00:12:55.027 "product_name": "Malloc disk", 00:12:55.027 "block_size": 512, 00:12:55.027 "num_blocks": 65536, 00:12:55.027 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 
00:12:55.027 "assigned_rate_limits": { 00:12:55.027 "rw_ios_per_sec": 0, 00:12:55.027 "rw_mbytes_per_sec": 0, 00:12:55.027 "r_mbytes_per_sec": 0, 00:12:55.027 "w_mbytes_per_sec": 0 00:12:55.027 }, 00:12:55.027 "claimed": true, 00:12:55.027 "claim_type": "exclusive_write", 00:12:55.027 "zoned": false, 00:12:55.027 "supported_io_types": { 00:12:55.027 "read": true, 00:12:55.027 "write": true, 00:12:55.027 "unmap": true, 00:12:55.027 "flush": true, 00:12:55.027 "reset": true, 00:12:55.027 "nvme_admin": false, 00:12:55.027 "nvme_io": false, 00:12:55.027 "nvme_io_md": false, 00:12:55.027 "write_zeroes": true, 00:12:55.027 "zcopy": true, 00:12:55.027 "get_zone_info": false, 00:12:55.027 "zone_management": false, 00:12:55.027 "zone_append": false, 00:12:55.027 "compare": false, 00:12:55.027 "compare_and_write": false, 00:12:55.027 "abort": true, 00:12:55.027 "seek_hole": false, 00:12:55.027 "seek_data": false, 00:12:55.027 "copy": true, 00:12:55.027 "nvme_iov_md": false 00:12:55.027 }, 00:12:55.027 "memory_domains": [ 00:12:55.027 { 00:12:55.027 "dma_device_id": "system", 00:12:55.027 "dma_device_type": 1 00:12:55.027 }, 00:12:55.027 { 00:12:55.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.027 "dma_device_type": 2 00:12:55.027 } 00:12:55.027 ], 00:12:55.027 "driver_specific": {} 00:12:55.027 } 00:12:55.027 ] 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.027 11:55:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.027 "name": "Existed_Raid", 00:12:55.027 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:55.027 "strip_size_kb": 64, 00:12:55.027 "state": "online", 00:12:55.027 "raid_level": "raid5f", 00:12:55.027 "superblock": true, 00:12:55.027 "num_base_bdevs": 3, 00:12:55.027 "num_base_bdevs_discovered": 3, 00:12:55.027 "num_base_bdevs_operational": 3, 00:12:55.027 "base_bdevs_list": [ 00:12:55.027 { 00:12:55.027 "name": "NewBaseBdev", 00:12:55.027 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 
00:12:55.027 "is_configured": true, 00:12:55.027 "data_offset": 2048, 00:12:55.027 "data_size": 63488 00:12:55.027 }, 00:12:55.027 { 00:12:55.027 "name": "BaseBdev2", 00:12:55.027 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:55.027 "is_configured": true, 00:12:55.027 "data_offset": 2048, 00:12:55.027 "data_size": 63488 00:12:55.027 }, 00:12:55.027 { 00:12:55.027 "name": "BaseBdev3", 00:12:55.027 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:55.027 "is_configured": true, 00:12:55.027 "data_offset": 2048, 00:12:55.027 "data_size": 63488 00:12:55.027 } 00:12:55.027 ] 00:12:55.027 }' 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.027 11:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.598 
[2024-12-06 11:55:53.113631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.598 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.598 "name": "Existed_Raid", 00:12:55.598 "aliases": [ 00:12:55.598 "95fb92af-ae20-4e62-9eef-0e33f5ff7797" 00:12:55.598 ], 00:12:55.598 "product_name": "Raid Volume", 00:12:55.598 "block_size": 512, 00:12:55.598 "num_blocks": 126976, 00:12:55.598 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:55.598 "assigned_rate_limits": { 00:12:55.598 "rw_ios_per_sec": 0, 00:12:55.598 "rw_mbytes_per_sec": 0, 00:12:55.598 "r_mbytes_per_sec": 0, 00:12:55.598 "w_mbytes_per_sec": 0 00:12:55.598 }, 00:12:55.598 "claimed": false, 00:12:55.598 "zoned": false, 00:12:55.598 "supported_io_types": { 00:12:55.598 "read": true, 00:12:55.598 "write": true, 00:12:55.598 "unmap": false, 00:12:55.598 "flush": false, 00:12:55.598 "reset": true, 00:12:55.598 "nvme_admin": false, 00:12:55.598 "nvme_io": false, 00:12:55.598 "nvme_io_md": false, 00:12:55.598 "write_zeroes": true, 00:12:55.598 "zcopy": false, 00:12:55.598 "get_zone_info": false, 00:12:55.598 "zone_management": false, 00:12:55.598 "zone_append": false, 00:12:55.598 "compare": false, 00:12:55.598 "compare_and_write": false, 00:12:55.598 "abort": false, 00:12:55.598 "seek_hole": false, 00:12:55.598 "seek_data": false, 00:12:55.598 "copy": false, 00:12:55.598 "nvme_iov_md": false 00:12:55.598 }, 00:12:55.598 "driver_specific": { 00:12:55.598 "raid": { 00:12:55.598 "uuid": "95fb92af-ae20-4e62-9eef-0e33f5ff7797", 00:12:55.598 "strip_size_kb": 64, 00:12:55.598 "state": "online", 00:12:55.598 "raid_level": "raid5f", 00:12:55.598 "superblock": true, 00:12:55.598 "num_base_bdevs": 3, 00:12:55.598 "num_base_bdevs_discovered": 3, 00:12:55.598 "num_base_bdevs_operational": 3, 00:12:55.598 "base_bdevs_list": 
[ 00:12:55.598 { 00:12:55.598 "name": "NewBaseBdev", 00:12:55.598 "uuid": "a624d62f-81f4-4f1f-9a3d-9687af3eeadb", 00:12:55.598 "is_configured": true, 00:12:55.598 "data_offset": 2048, 00:12:55.598 "data_size": 63488 00:12:55.598 }, 00:12:55.598 { 00:12:55.598 "name": "BaseBdev2", 00:12:55.598 "uuid": "bbbc0d39-2d1d-4def-8e57-fbb9b89b0507", 00:12:55.598 "is_configured": true, 00:12:55.598 "data_offset": 2048, 00:12:55.598 "data_size": 63488 00:12:55.598 }, 00:12:55.598 { 00:12:55.598 "name": "BaseBdev3", 00:12:55.598 "uuid": "99545a90-4079-498a-8f78-2fc4ad1df70b", 00:12:55.598 "is_configured": true, 00:12:55.599 "data_offset": 2048, 00:12:55.599 "data_size": 63488 00:12:55.599 } 00:12:55.599 ] 00:12:55.599 } 00:12:55.599 } 00:12:55.599 }' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:55.599 BaseBdev2 00:12:55.599 BaseBdev3' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.599 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 11:55:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-12-06 11:55:53.392944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.860 [2024-12-06 11:55:53.392969] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.860 [2024-12-06 11:55:53.393037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.860 [2024-12-06 11:55:53.393282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.860 [2024-12-06 11:55:53.393295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90653 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90653 ']' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 90653 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90653 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90653' 00:12:55.860 killing process with pid 90653 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 90653 00:12:55.860 [2024-12-06 11:55:53.444012] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.860 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 90653 00:12:55.860 [2024-12-06 11:55:53.475857] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.121 11:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:56.121 00:12:56.121 real 0m8.769s 00:12:56.121 user 0m14.908s 00:12:56.121 sys 0m1.930s 00:12:56.121 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.121 11:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.121 ************************************ 00:12:56.121 END TEST raid5f_state_function_test_sb 00:12:56.121 ************************************ 00:12:56.121 11:55:53 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:12:56.121 11:55:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:56.121 11:55:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.121 11:55:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:56.121 ************************************ 00:12:56.121 START TEST raid5f_superblock_test 00:12:56.121 ************************************ 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91257 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91257 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91257 ']' 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.121 11:55:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.381 [2024-12-06 11:55:53.888647] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:56.381 [2024-12-06 11:55:53.888761] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91257 ] 00:12:56.381 [2024-12-06 11:55:54.034903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.381 [2024-12-06 11:55:54.082577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.382 [2024-12-06 11:55:54.125991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.382 [2024-12-06 11:55:54.126032] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.951 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 malloc1 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 [2024-12-06 11:55:54.728731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.211 [2024-12-06 11:55:54.728886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.211 [2024-12-06 11:55:54.728921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:57.211 [2024-12-06 11:55:54.728954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.211 [2024-12-06 11:55:54.730966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.211 [2024-12-06 11:55:54.731039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.211 pt1 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 malloc2 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 [2024-12-06 11:55:54.769729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.211 [2024-12-06 11:55:54.769922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.211 [2024-12-06 11:55:54.769999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:57.211 [2024-12-06 11:55:54.770079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.211 [2024-12-06 11:55:54.774630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.211 [2024-12-06 11:55:54.774779] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.211 pt2 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 malloc3 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 [2024-12-06 11:55:54.804691] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.211 [2024-12-06 11:55:54.804814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.211 [2024-12-06 11:55:54.804848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.211 [2024-12-06 11:55:54.804878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.211 [2024-12-06 11:55:54.806902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.211 [2024-12-06 11:55:54.806991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.211 pt3 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.211 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.211 [2024-12-06 11:55:54.816762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.211 [2024-12-06 11:55:54.818609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.211 [2024-12-06 11:55:54.818717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.211 [2024-12-06 11:55:54.818890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:57.212 [2024-12-06 11:55:54.818907] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
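The `raid_bdev_configure_cont` line above reports `blockcnt 126976, blocklen 512`, and the `base_bdevs_list` JSON later in the log shows each base bdev with `data_offset 2048` and `data_size 63488`. Those numbers are consistent: `bdev_malloc_create 32 512` gives each base bdev 32 MiB / 512 B = 65536 blocks, the on-disk superblock reserves 2048 of them, and raid5f across 3 members loses one member's worth of capacity to parity. A hedged sketch of that arithmetic (illustrative helper, not SPDK code):

```python
def raid5f_blockcnt(num_base_bdevs: int, base_blocks: int, sb_offset_blocks: int) -> int:
    """Usable block count of a raid5f bdev with an on-disk superblock:
    each base bdev contributes (base_blocks - sb_offset_blocks) data blocks,
    and one base bdev's worth of capacity goes to parity."""
    data_size = base_blocks - sb_offset_blocks       # 65536 - 2048 = 63488 per member
    return (num_base_bdevs - 1) * data_size          # 2 * 63488 = 126976
```

Plugging in the log's values, `raid5f_blockcnt(3, 65536, 2048)` matches the reported `blockcnt 126976`, and `63488` matches the per-member `data_size`.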
00:12:57.212 [2024-12-06 11:55:54.819156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:57.212 [2024-12-06 11:55:54.819624] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:57.212 [2024-12-06 11:55:54.819647] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:57.212 [2024-12-06 11:55:54.819764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.212 "name": "raid_bdev1", 00:12:57.212 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:57.212 "strip_size_kb": 64, 00:12:57.212 "state": "online", 00:12:57.212 "raid_level": "raid5f", 00:12:57.212 "superblock": true, 00:12:57.212 "num_base_bdevs": 3, 00:12:57.212 "num_base_bdevs_discovered": 3, 00:12:57.212 "num_base_bdevs_operational": 3, 00:12:57.212 "base_bdevs_list": [ 00:12:57.212 { 00:12:57.212 "name": "pt1", 00:12:57.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.212 "is_configured": true, 00:12:57.212 "data_offset": 2048, 00:12:57.212 "data_size": 63488 00:12:57.212 }, 00:12:57.212 { 00:12:57.212 "name": "pt2", 00:12:57.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.212 "is_configured": true, 00:12:57.212 "data_offset": 2048, 00:12:57.212 "data_size": 63488 00:12:57.212 }, 00:12:57.212 { 00:12:57.212 "name": "pt3", 00:12:57.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.212 "is_configured": true, 00:12:57.212 "data_offset": 2048, 00:12:57.212 "data_size": 63488 00:12:57.212 } 00:12:57.212 ] 00:12:57.212 }' 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.212 11:55:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.780 11:55:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.780 [2024-12-06 11:55:55.252555] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.780 "name": "raid_bdev1", 00:12:57.780 "aliases": [ 00:12:57.780 "66cf44d3-f3d4-4c16-8e92-6027906de5a8" 00:12:57.780 ], 00:12:57.780 "product_name": "Raid Volume", 00:12:57.780 "block_size": 512, 00:12:57.780 "num_blocks": 126976, 00:12:57.780 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:57.780 "assigned_rate_limits": { 00:12:57.780 "rw_ios_per_sec": 0, 00:12:57.780 "rw_mbytes_per_sec": 0, 00:12:57.780 "r_mbytes_per_sec": 0, 00:12:57.780 "w_mbytes_per_sec": 0 00:12:57.780 }, 00:12:57.780 "claimed": false, 00:12:57.780 "zoned": false, 00:12:57.780 "supported_io_types": { 00:12:57.780 "read": true, 00:12:57.780 "write": true, 00:12:57.780 "unmap": false, 00:12:57.780 "flush": false, 00:12:57.780 "reset": true, 00:12:57.780 "nvme_admin": false, 00:12:57.780 "nvme_io": false, 00:12:57.780 "nvme_io_md": false, 
00:12:57.780 "write_zeroes": true, 00:12:57.780 "zcopy": false, 00:12:57.780 "get_zone_info": false, 00:12:57.780 "zone_management": false, 00:12:57.780 "zone_append": false, 00:12:57.780 "compare": false, 00:12:57.780 "compare_and_write": false, 00:12:57.780 "abort": false, 00:12:57.780 "seek_hole": false, 00:12:57.780 "seek_data": false, 00:12:57.780 "copy": false, 00:12:57.780 "nvme_iov_md": false 00:12:57.780 }, 00:12:57.780 "driver_specific": { 00:12:57.780 "raid": { 00:12:57.780 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:57.780 "strip_size_kb": 64, 00:12:57.780 "state": "online", 00:12:57.780 "raid_level": "raid5f", 00:12:57.780 "superblock": true, 00:12:57.780 "num_base_bdevs": 3, 00:12:57.780 "num_base_bdevs_discovered": 3, 00:12:57.780 "num_base_bdevs_operational": 3, 00:12:57.780 "base_bdevs_list": [ 00:12:57.780 { 00:12:57.780 "name": "pt1", 00:12:57.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.780 "is_configured": true, 00:12:57.780 "data_offset": 2048, 00:12:57.780 "data_size": 63488 00:12:57.780 }, 00:12:57.780 { 00:12:57.780 "name": "pt2", 00:12:57.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.780 "is_configured": true, 00:12:57.780 "data_offset": 2048, 00:12:57.780 "data_size": 63488 00:12:57.780 }, 00:12:57.780 { 00:12:57.780 "name": "pt3", 00:12:57.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.780 "is_configured": true, 00:12:57.780 "data_offset": 2048, 00:12:57.780 "data_size": 63488 00:12:57.780 } 00:12:57.780 ] 00:12:57.780 } 00:12:57.780 } 00:12:57.780 }' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.780 pt2 00:12:57.780 pt3' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.780 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.781 
11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:57.781 [2024-12-06 11:55:55.512074] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.781 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66cf44d3-f3d4-4c16-8e92-6027906de5a8 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66cf44d3-f3d4-4c16-8e92-6027906de5a8 ']' 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.040 11:55:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [2024-12-06 11:55:55.563812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.040 [2024-12-06 11:55:55.563881] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.040 [2024-12-06 11:55:55.563982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.040 [2024-12-06 11:55:55.564052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.040 [2024-12-06 11:55:55.564072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.040 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.041 [2024-12-06 11:55:55.723580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:58.041 [2024-12-06 11:55:55.725452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:58.041 [2024-12-06 11:55:55.725496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:58.041 [2024-12-06 11:55:55.725542] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:58.041 [2024-12-06 11:55:55.725581] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:58.041 [2024-12-06 11:55:55.725600] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:58.041 [2024-12-06 11:55:55.725612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.041 [2024-12-06 11:55:55.725624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:12:58.041 request: 00:12:58.041 { 00:12:58.041 "name": "raid_bdev1", 00:12:58.041 "raid_level": "raid5f", 00:12:58.041 "base_bdevs": [ 00:12:58.041 "malloc1", 00:12:58.041 "malloc2", 00:12:58.041 "malloc3" 00:12:58.041 ], 00:12:58.041 "strip_size_kb": 64, 00:12:58.041 "superblock": false, 00:12:58.041 "method": "bdev_raid_create", 00:12:58.041 "req_id": 1 00:12:58.041 } 00:12:58.041 Got JSON-RPC error response 00:12:58.041 response: 00:12:58.041 { 00:12:58.041 "code": -17, 00:12:58.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:58.041 } 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.041 
11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.041 [2024-12-06 11:55:55.787441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.041 [2024-12-06 11:55:55.787535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.041 [2024-12-06 11:55:55.787567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.041 [2024-12-06 11:55:55.787598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.041 [2024-12-06 11:55:55.789719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.041 [2024-12-06 11:55:55.789807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.041 [2024-12-06 11:55:55.789889] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:58.041 [2024-12-06 11:55:55.789965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.041 pt1 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.041 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.301 "name": "raid_bdev1", 00:12:58.301 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:58.301 "strip_size_kb": 64, 00:12:58.301 "state": "configuring", 00:12:58.301 "raid_level": "raid5f", 00:12:58.301 "superblock": true, 00:12:58.301 "num_base_bdevs": 3, 00:12:58.301 "num_base_bdevs_discovered": 1, 00:12:58.301 
"num_base_bdevs_operational": 3, 00:12:58.301 "base_bdevs_list": [ 00:12:58.301 { 00:12:58.301 "name": "pt1", 00:12:58.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.301 "is_configured": true, 00:12:58.301 "data_offset": 2048, 00:12:58.301 "data_size": 63488 00:12:58.301 }, 00:12:58.301 { 00:12:58.301 "name": null, 00:12:58.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.301 "is_configured": false, 00:12:58.301 "data_offset": 2048, 00:12:58.301 "data_size": 63488 00:12:58.301 }, 00:12:58.301 { 00:12:58.301 "name": null, 00:12:58.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.301 "is_configured": false, 00:12:58.301 "data_offset": 2048, 00:12:58.301 "data_size": 63488 00:12:58.301 } 00:12:58.301 ] 00:12:58.301 }' 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.301 11:55:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.561 [2024-12-06 11:55:56.238730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.561 [2024-12-06 11:55:56.238789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.561 [2024-12-06 11:55:56.238807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.561 [2024-12-06 11:55:56.238818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.561 [2024-12-06 11:55:56.239135] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.561 [2024-12-06 11:55:56.239153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.561 [2024-12-06 11:55:56.239206] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.561 [2024-12-06 11:55:56.239227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.561 pt2 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.561 [2024-12-06 11:55:56.250719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.561 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.562 "name": "raid_bdev1", 00:12:58.562 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:58.562 "strip_size_kb": 64, 00:12:58.562 "state": "configuring", 00:12:58.562 "raid_level": "raid5f", 00:12:58.562 "superblock": true, 00:12:58.562 "num_base_bdevs": 3, 00:12:58.562 "num_base_bdevs_discovered": 1, 00:12:58.562 "num_base_bdevs_operational": 3, 00:12:58.562 "base_bdevs_list": [ 00:12:58.562 { 00:12:58.562 "name": "pt1", 00:12:58.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.562 "is_configured": true, 00:12:58.562 "data_offset": 2048, 00:12:58.562 "data_size": 63488 00:12:58.562 }, 00:12:58.562 { 00:12:58.562 "name": null, 00:12:58.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.562 "is_configured": false, 00:12:58.562 "data_offset": 0, 00:12:58.562 "data_size": 63488 00:12:58.562 }, 00:12:58.562 { 00:12:58.562 "name": null, 00:12:58.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.562 "is_configured": false, 00:12:58.562 "data_offset": 2048, 00:12:58.562 "data_size": 63488 00:12:58.562 } 00:12:58.562 ] 00:12:58.562 }' 00:12:58.562 11:55:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.562 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 [2024-12-06 11:55:56.694025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.134 [2024-12-06 11:55:56.694127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.134 [2024-12-06 11:55:56.694164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:59.134 [2024-12-06 11:55:56.694190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.134 [2024-12-06 11:55:56.694547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.134 [2024-12-06 11:55:56.694604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.134 [2024-12-06 11:55:56.694688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:59.134 [2024-12-06 11:55:56.694734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.134 pt2 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.134 11:55:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 [2024-12-06 11:55:56.706004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.134 [2024-12-06 11:55:56.706109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.134 [2024-12-06 11:55:56.706144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:59.134 [2024-12-06 11:55:56.706175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.134 [2024-12-06 11:55:56.706520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.134 [2024-12-06 11:55:56.706578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.134 [2024-12-06 11:55:56.706664] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:59.134 [2024-12-06 11:55:56.706727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.134 [2024-12-06 11:55:56.706856] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:59.134 [2024-12-06 11:55:56.706892] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.134 [2024-12-06 11:55:56.707138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:12:59.134 [2024-12-06 11:55:56.707576] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:59.134 [2024-12-06 11:55:56.707596] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:12:59.134 [2024-12-06 11:55:56.707690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.134 pt3 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.134 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.134 "name": "raid_bdev1", 00:12:59.134 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:59.134 "strip_size_kb": 64, 00:12:59.135 "state": "online", 00:12:59.135 "raid_level": "raid5f", 00:12:59.135 "superblock": true, 00:12:59.135 "num_base_bdevs": 3, 00:12:59.135 "num_base_bdevs_discovered": 3, 00:12:59.135 "num_base_bdevs_operational": 3, 00:12:59.135 "base_bdevs_list": [ 00:12:59.135 { 00:12:59.135 "name": "pt1", 00:12:59.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.135 "is_configured": true, 00:12:59.135 "data_offset": 2048, 00:12:59.135 "data_size": 63488 00:12:59.135 }, 00:12:59.135 { 00:12:59.135 "name": "pt2", 00:12:59.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.135 "is_configured": true, 00:12:59.135 "data_offset": 2048, 00:12:59.135 "data_size": 63488 00:12:59.135 }, 00:12:59.135 { 00:12:59.135 "name": "pt3", 00:12:59.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.135 "is_configured": true, 00:12:59.135 "data_offset": 2048, 00:12:59.135 "data_size": 63488 00:12:59.135 } 00:12:59.135 ] 00:12:59.135 }' 00:12:59.135 11:55:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.135 11:55:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.705 [2024-12-06 11:55:57.181366] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.705 "name": "raid_bdev1", 00:12:59.705 "aliases": [ 00:12:59.705 "66cf44d3-f3d4-4c16-8e92-6027906de5a8" 00:12:59.705 ], 00:12:59.705 "product_name": "Raid Volume", 00:12:59.705 "block_size": 512, 00:12:59.705 "num_blocks": 126976, 00:12:59.705 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:59.705 "assigned_rate_limits": { 00:12:59.705 "rw_ios_per_sec": 0, 00:12:59.705 "rw_mbytes_per_sec": 0, 00:12:59.705 "r_mbytes_per_sec": 0, 00:12:59.705 "w_mbytes_per_sec": 0 00:12:59.705 }, 00:12:59.705 "claimed": false, 00:12:59.705 "zoned": false, 00:12:59.705 "supported_io_types": { 00:12:59.705 "read": true, 00:12:59.705 "write": true, 00:12:59.705 "unmap": false, 00:12:59.705 "flush": false, 00:12:59.705 "reset": true, 00:12:59.705 "nvme_admin": false, 00:12:59.705 "nvme_io": false, 00:12:59.705 "nvme_io_md": false, 00:12:59.705 "write_zeroes": true, 00:12:59.705 "zcopy": false, 00:12:59.705 
"get_zone_info": false, 00:12:59.705 "zone_management": false, 00:12:59.705 "zone_append": false, 00:12:59.705 "compare": false, 00:12:59.705 "compare_and_write": false, 00:12:59.705 "abort": false, 00:12:59.705 "seek_hole": false, 00:12:59.705 "seek_data": false, 00:12:59.705 "copy": false, 00:12:59.705 "nvme_iov_md": false 00:12:59.705 }, 00:12:59.705 "driver_specific": { 00:12:59.705 "raid": { 00:12:59.705 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:59.705 "strip_size_kb": 64, 00:12:59.705 "state": "online", 00:12:59.705 "raid_level": "raid5f", 00:12:59.705 "superblock": true, 00:12:59.705 "num_base_bdevs": 3, 00:12:59.705 "num_base_bdevs_discovered": 3, 00:12:59.705 "num_base_bdevs_operational": 3, 00:12:59.705 "base_bdevs_list": [ 00:12:59.705 { 00:12:59.705 "name": "pt1", 00:12:59.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.705 "is_configured": true, 00:12:59.705 "data_offset": 2048, 00:12:59.705 "data_size": 63488 00:12:59.705 }, 00:12:59.705 { 00:12:59.705 "name": "pt2", 00:12:59.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.705 "is_configured": true, 00:12:59.705 "data_offset": 2048, 00:12:59.705 "data_size": 63488 00:12:59.705 }, 00:12:59.705 { 00:12:59.705 "name": "pt3", 00:12:59.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.705 "is_configured": true, 00:12:59.705 "data_offset": 2048, 00:12:59.705 "data_size": 63488 00:12:59.705 } 00:12:59.705 ] 00:12:59.705 } 00:12:59.705 } 00:12:59.705 }' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.705 pt2 00:12:59.705 pt3' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.705 11:55:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.705 11:55:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.706 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.706 [2024-12-06 11:55:57.448862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66cf44d3-f3d4-4c16-8e92-6027906de5a8 '!=' 66cf44d3-f3d4-4c16-8e92-6027906de5a8 ']' 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.967 [2024-12-06 11:55:57.476700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.967 "name": "raid_bdev1", 00:12:59.967 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:12:59.967 "strip_size_kb": 64, 00:12:59.967 "state": "online", 00:12:59.967 "raid_level": "raid5f", 00:12:59.967 "superblock": true, 00:12:59.967 "num_base_bdevs": 3, 00:12:59.967 "num_base_bdevs_discovered": 2, 00:12:59.967 "num_base_bdevs_operational": 2, 00:12:59.967 "base_bdevs_list": [ 00:12:59.967 { 00:12:59.967 "name": null, 00:12:59.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.967 "is_configured": false, 00:12:59.967 "data_offset": 0, 00:12:59.967 "data_size": 63488 00:12:59.967 }, 00:12:59.967 { 00:12:59.967 "name": "pt2", 00:12:59.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.967 "is_configured": true, 00:12:59.967 "data_offset": 2048, 00:12:59.967 "data_size": 63488 00:12:59.967 }, 00:12:59.967 { 00:12:59.967 "name": "pt3", 00:12:59.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.967 "is_configured": true, 00:12:59.967 "data_offset": 2048, 00:12:59.967 "data_size": 63488 00:12:59.967 } 00:12:59.967 ] 00:12:59.967 }' 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.967 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.228 [2024-12-06 11:55:57.903916] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.228 [2024-12-06 11:55:57.903996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.228 [2024-12-06 11:55:57.904086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.228 [2024-12-06 11:55:57.904153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.228 [2024-12-06 11:55:57.904194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.228 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.489 [2024-12-06 11:55:57.987768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.489 [2024-12-06 11:55:57.987821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.489 [2024-12-06 11:55:57.987840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:00.489 [2024-12-06 11:55:57.987848] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:00.489 [2024-12-06 11:55:57.989856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.489 [2024-12-06 11:55:57.989892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.489 [2024-12-06 11:55:57.989956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.489 [2024-12-06 11:55:57.990010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.489 pt2 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.489 11:55:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:00.489 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.489 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.489 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.489 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.489 "name": "raid_bdev1", 00:13:00.489 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:13:00.489 "strip_size_kb": 64, 00:13:00.489 "state": "configuring", 00:13:00.489 "raid_level": "raid5f", 00:13:00.489 "superblock": true, 00:13:00.489 "num_base_bdevs": 3, 00:13:00.489 "num_base_bdevs_discovered": 1, 00:13:00.489 "num_base_bdevs_operational": 2, 00:13:00.489 "base_bdevs_list": [ 00:13:00.489 { 00:13:00.489 "name": null, 00:13:00.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.489 "is_configured": false, 00:13:00.489 "data_offset": 2048, 00:13:00.489 "data_size": 63488 00:13:00.489 }, 00:13:00.489 { 00:13:00.489 "name": "pt2", 00:13:00.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.489 "is_configured": true, 00:13:00.489 "data_offset": 2048, 00:13:00.489 "data_size": 63488 00:13:00.490 }, 00:13:00.490 { 00:13:00.490 "name": null, 00:13:00.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.490 "is_configured": false, 00:13:00.490 "data_offset": 2048, 00:13:00.490 "data_size": 63488 00:13:00.490 } 00:13:00.490 ] 00:13:00.490 }' 00:13:00.490 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.490 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.750 [2024-12-06 11:55:58.466998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:00.750 [2024-12-06 11:55:58.467096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.750 [2024-12-06 11:55:58.467131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:00.750 [2024-12-06 11:55:58.467159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.750 [2024-12-06 11:55:58.467506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.750 [2024-12-06 11:55:58.467570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:00.750 [2024-12-06 11:55:58.467652] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:00.750 [2024-12-06 11:55:58.467696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:00.750 [2024-12-06 11:55:58.467793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:00.750 [2024-12-06 11:55:58.467827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:00.750 [2024-12-06 11:55:58.468068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:00.750 [2024-12-06 11:55:58.468560] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:00.750 [2024-12-06 11:55:58.468613] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:00.750 [2024-12-06 11:55:58.468875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.750 pt3 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.750 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.011 11:55:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.011 "name": "raid_bdev1", 00:13:01.011 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:13:01.011 "strip_size_kb": 64, 00:13:01.011 "state": "online", 00:13:01.011 "raid_level": "raid5f", 00:13:01.012 "superblock": true, 00:13:01.012 "num_base_bdevs": 3, 00:13:01.012 "num_base_bdevs_discovered": 2, 00:13:01.012 "num_base_bdevs_operational": 2, 00:13:01.012 "base_bdevs_list": [ 00:13:01.012 { 00:13:01.012 "name": null, 00:13:01.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.012 "is_configured": false, 00:13:01.012 "data_offset": 2048, 00:13:01.012 "data_size": 63488 00:13:01.012 }, 00:13:01.012 { 00:13:01.012 "name": "pt2", 00:13:01.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.012 "is_configured": true, 00:13:01.012 "data_offset": 2048, 00:13:01.012 "data_size": 63488 00:13:01.012 }, 00:13:01.012 { 00:13:01.012 "name": "pt3", 00:13:01.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.012 "is_configured": true, 00:13:01.012 "data_offset": 2048, 00:13:01.012 "data_size": 63488 00:13:01.012 } 00:13:01.012 ] 00:13:01.012 }' 00:13:01.012 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.012 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 [2024-12-06 11:55:58.906245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.273 [2024-12-06 11:55:58.906284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.273 [2024-12-06 11:55:58.906350] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.273 [2024-12-06 11:55:58.906403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.273 [2024-12-06 11:55:58.906414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 [2024-12-06 11:55:58.982098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:01.273 [2024-12-06 11:55:58.982151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.273 [2024-12-06 11:55:58.982166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.273 [2024-12-06 11:55:58.982176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.273 [2024-12-06 11:55:58.984252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.273 [2024-12-06 11:55:58.984300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:01.273 [2024-12-06 11:55:58.984362] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:01.273 [2024-12-06 11:55:58.984404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:01.273 [2024-12-06 11:55:58.984501] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:01.273 [2024-12-06 11:55:58.984516] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.273 [2024-12-06 11:55:58.984530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:01.273 [2024-12-06 11:55:58.984559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.273 pt1 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:01.273 11:55:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.273 11:55:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.534 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.534 "name": "raid_bdev1", 00:13:01.534 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:13:01.534 "strip_size_kb": 64, 00:13:01.534 "state": "configuring", 00:13:01.534 "raid_level": "raid5f", 00:13:01.534 
"superblock": true, 00:13:01.534 "num_base_bdevs": 3, 00:13:01.534 "num_base_bdevs_discovered": 1, 00:13:01.534 "num_base_bdevs_operational": 2, 00:13:01.534 "base_bdevs_list": [ 00:13:01.534 { 00:13:01.534 "name": null, 00:13:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.534 "is_configured": false, 00:13:01.534 "data_offset": 2048, 00:13:01.534 "data_size": 63488 00:13:01.534 }, 00:13:01.534 { 00:13:01.534 "name": "pt2", 00:13:01.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.534 "is_configured": true, 00:13:01.534 "data_offset": 2048, 00:13:01.534 "data_size": 63488 00:13:01.534 }, 00:13:01.534 { 00:13:01.534 "name": null, 00:13:01.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.534 "is_configured": false, 00:13:01.534 "data_offset": 2048, 00:13:01.534 "data_size": 63488 00:13:01.534 } 00:13:01.534 ] 00:13:01.534 }' 00:13:01.534 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.534 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.795 [2024-12-06 11:55:59.489230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.795 [2024-12-06 11:55:59.489291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.795 [2024-12-06 11:55:59.489306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:01.795 [2024-12-06 11:55:59.489317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.795 [2024-12-06 11:55:59.489637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.795 [2024-12-06 11:55:59.489659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.795 [2024-12-06 11:55:59.489719] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.795 [2024-12-06 11:55:59.489747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.795 [2024-12-06 11:55:59.489829] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:01.795 [2024-12-06 11:55:59.489839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:01.795 [2024-12-06 11:55:59.490045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:01.795 [2024-12-06 11:55:59.490471] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:01.795 [2024-12-06 11:55:59.490483] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:01.795 [2024-12-06 11:55:59.490627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.795 pt3 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.795 "name": "raid_bdev1", 00:13:01.795 "uuid": "66cf44d3-f3d4-4c16-8e92-6027906de5a8", 00:13:01.795 "strip_size_kb": 64, 00:13:01.795 "state": "online", 00:13:01.795 "raid_level": 
"raid5f", 00:13:01.795 "superblock": true, 00:13:01.795 "num_base_bdevs": 3, 00:13:01.795 "num_base_bdevs_discovered": 2, 00:13:01.795 "num_base_bdevs_operational": 2, 00:13:01.795 "base_bdevs_list": [ 00:13:01.795 { 00:13:01.795 "name": null, 00:13:01.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.795 "is_configured": false, 00:13:01.795 "data_offset": 2048, 00:13:01.795 "data_size": 63488 00:13:01.795 }, 00:13:01.795 { 00:13:01.795 "name": "pt2", 00:13:01.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.795 "is_configured": true, 00:13:01.795 "data_offset": 2048, 00:13:01.795 "data_size": 63488 00:13:01.795 }, 00:13:01.795 { 00:13:01.795 "name": "pt3", 00:13:01.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.795 "is_configured": true, 00:13:01.795 "data_offset": 2048, 00:13:01.795 "data_size": 63488 00:13:01.795 } 00:13:01.795 ] 00:13:01.795 }' 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.795 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.367 [2024-12-06 11:55:59.956650] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 66cf44d3-f3d4-4c16-8e92-6027906de5a8 '!=' 66cf44d3-f3d4-4c16-8e92-6027906de5a8 ']' 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91257 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91257 ']' 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91257 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:02.367 11:55:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91257 00:13:02.367 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:02.367 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:02.367 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91257' 00:13:02.367 killing process with pid 91257 00:13:02.367 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91257 00:13:02.367 [2024-12-06 11:56:00.025123] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.367 [2024-12-06 11:56:00.025260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:02.367 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91257 00:13:02.367 [2024-12-06 11:56:00.025346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.367 [2024-12-06 11:56:00.025364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:02.367 [2024-12-06 11:56:00.059374] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.627 11:56:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:02.627 00:13:02.627 real 0m6.502s 00:13:02.627 user 0m10.847s 00:13:02.627 sys 0m1.390s 00:13:02.627 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.627 11:56:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 ************************************ 00:13:02.627 END TEST raid5f_superblock_test 00:13:02.627 ************************************ 00:13:02.627 11:56:00 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:02.627 11:56:00 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:02.627 11:56:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:02.627 11:56:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.627 11:56:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.888 ************************************ 00:13:02.888 START TEST raid5f_rebuild_test 00:13:02.888 ************************************ 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.888 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:02.889 11:56:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91684 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91684 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 91684 ']' 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.889 11:56:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.889 [2024-12-06 11:56:00.490095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:02.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.889 Zero copy mechanism will not be used. 00:13:02.889 [2024-12-06 11:56:00.490302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91684 ] 00:13:02.889 [2024-12-06 11:56:00.637739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.149 [2024-12-06 11:56:00.684346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.149 [2024-12-06 11:56:00.727666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.149 [2024-12-06 11:56:00.727701] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 BaseBdev1_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-12-06 11:56:01.314626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.720 [2024-12-06 11:56:01.314688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.720 [2024-12-06 11:56:01.314710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:03.720 [2024-12-06 11:56:01.314724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.720 [2024-12-06 11:56:01.316813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.720 [2024-12-06 11:56:01.316852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.720 BaseBdev1 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 BaseBdev2_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-12-06 11:56:01.358932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:03.720 [2024-12-06 11:56:01.359025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.720 [2024-12-06 11:56:01.359068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:03.720 [2024-12-06 11:56:01.359090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.720 [2024-12-06 11:56:01.363525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.720 [2024-12-06 11:56:01.363576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.720 BaseBdev2 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 BaseBdev3_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-12-06 11:56:01.389333] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:03.720 [2024-12-06 11:56:01.389384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.720 [2024-12-06 11:56:01.389409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:03.720 [2024-12-06 11:56:01.389417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.720 [2024-12-06 11:56:01.391423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.720 [2024-12-06 11:56:01.391458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.720 BaseBdev3 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 spare_malloc 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 spare_delay 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-12-06 11:56:01.429886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.720 [2024-12-06 11:56:01.429936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.720 [2024-12-06 11:56:01.429959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:03.720 [2024-12-06 11:56:01.429967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.720 [2024-12-06 11:56:01.432027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.720 [2024-12-06 11:56:01.432062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.720 spare 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.721 [2024-12-06 11:56:01.441915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.721 [2024-12-06 11:56:01.443699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.721 [2024-12-06 11:56:01.443851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.721 [2024-12-06 11:56:01.443928] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:03.721 [2024-12-06 11:56:01.443939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:03.721 [2024-12-06 
11:56:01.444189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:03.721 [2024-12-06 11:56:01.444598] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:03.721 [2024-12-06 11:56:01.444618] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:03.721 [2024-12-06 11:56:01.444739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.721 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.982 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.982 "name": "raid_bdev1", 00:13:03.982 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:03.982 "strip_size_kb": 64, 00:13:03.982 "state": "online", 00:13:03.982 "raid_level": "raid5f", 00:13:03.982 "superblock": false, 00:13:03.982 "num_base_bdevs": 3, 00:13:03.982 "num_base_bdevs_discovered": 3, 00:13:03.982 "num_base_bdevs_operational": 3, 00:13:03.982 "base_bdevs_list": [ 00:13:03.982 { 00:13:03.982 "name": "BaseBdev1", 00:13:03.982 "uuid": "40af1b11-4078-50c6-9e2a-83b45c5385c5", 00:13:03.982 "is_configured": true, 00:13:03.982 "data_offset": 0, 00:13:03.982 "data_size": 65536 00:13:03.982 }, 00:13:03.982 { 00:13:03.982 "name": "BaseBdev2", 00:13:03.982 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:03.982 "is_configured": true, 00:13:03.982 "data_offset": 0, 00:13:03.982 "data_size": 65536 00:13:03.982 }, 00:13:03.982 { 00:13:03.982 "name": "BaseBdev3", 00:13:03.982 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:03.982 "is_configured": true, 00:13:03.982 "data_offset": 0, 00:13:03.982 "data_size": 65536 00:13:03.982 } 00:13:03.982 ] 00:13:03.982 }' 00:13:03.982 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.982 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.243 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.243 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:04.243 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.244 11:56:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.244 [2024-12-06 11:56:01.889541] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.244 11:56:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:04.504 [2024-12-06 11:56:02.160879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:04.504 /dev/nbd0 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.504 1+0 records in 00:13:04.504 1+0 records out 00:13:04.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497133 s, 8.2 MB/s 00:13:04.504 
11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:04.504 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:05.074 512+0 records in 00:13:05.074 512+0 records out 00:13:05.074 67108864 bytes (67 MB, 64 MiB) copied, 0.300816 s, 223 MB/s 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.074 [2024-12-06 11:56:02.757692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.074 [2024-12-06 11:56:02.777725] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.074 11:56:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.074 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.335 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.335 "name": "raid_bdev1", 00:13:05.335 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:05.335 "strip_size_kb": 64, 00:13:05.335 "state": "online", 00:13:05.335 "raid_level": "raid5f", 00:13:05.335 "superblock": false, 00:13:05.335 "num_base_bdevs": 3, 00:13:05.335 "num_base_bdevs_discovered": 2, 00:13:05.335 "num_base_bdevs_operational": 2, 00:13:05.335 "base_bdevs_list": [ 00:13:05.335 { 00:13:05.335 "name": null, 00:13:05.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.335 "is_configured": false, 00:13:05.335 "data_offset": 0, 00:13:05.335 "data_size": 65536 00:13:05.335 }, 00:13:05.335 { 00:13:05.335 
"name": "BaseBdev2", 00:13:05.335 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:05.335 "is_configured": true, 00:13:05.335 "data_offset": 0, 00:13:05.335 "data_size": 65536 00:13:05.335 }, 00:13:05.335 { 00:13:05.335 "name": "BaseBdev3", 00:13:05.335 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:05.335 "is_configured": true, 00:13:05.335 "data_offset": 0, 00:13:05.335 "data_size": 65536 00:13:05.335 } 00:13:05.335 ] 00:13:05.335 }' 00:13:05.335 11:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.335 11:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.595 11:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.595 11:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.595 11:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.595 [2024-12-06 11:56:03.225001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.595 [2024-12-06 11:56:03.228848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:05.595 [2024-12-06 11:56:03.231066] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.595 11:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.595 11:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.535 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.795 "name": "raid_bdev1", 00:13:06.795 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:06.795 "strip_size_kb": 64, 00:13:06.795 "state": "online", 00:13:06.795 "raid_level": "raid5f", 00:13:06.795 "superblock": false, 00:13:06.795 "num_base_bdevs": 3, 00:13:06.795 "num_base_bdevs_discovered": 3, 00:13:06.795 "num_base_bdevs_operational": 3, 00:13:06.795 "process": { 00:13:06.795 "type": "rebuild", 00:13:06.795 "target": "spare", 00:13:06.795 "progress": { 00:13:06.795 "blocks": 20480, 00:13:06.795 "percent": 15 00:13:06.795 } 00:13:06.795 }, 00:13:06.795 "base_bdevs_list": [ 00:13:06.795 { 00:13:06.795 "name": "spare", 00:13:06.795 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:06.795 "is_configured": true, 00:13:06.795 "data_offset": 0, 00:13:06.795 "data_size": 65536 00:13:06.795 }, 00:13:06.795 { 00:13:06.795 "name": "BaseBdev2", 00:13:06.795 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:06.795 "is_configured": true, 00:13:06.795 "data_offset": 0, 00:13:06.795 "data_size": 65536 00:13:06.795 }, 00:13:06.795 { 00:13:06.795 "name": "BaseBdev3", 00:13:06.795 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:06.795 "is_configured": true, 00:13:06.795 "data_offset": 0, 00:13:06.795 
"data_size": 65536 00:13:06.795 } 00:13:06.795 ] 00:13:06.795 }' 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.795 [2024-12-06 11:56:04.377783] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.795 [2024-12-06 11:56:04.437724] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.795 [2024-12-06 11:56:04.437787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.795 [2024-12-06 11:56:04.437803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.795 [2024-12-06 11:56:04.437812] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.795 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.796 "name": "raid_bdev1", 00:13:06.796 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:06.796 "strip_size_kb": 64, 00:13:06.796 "state": "online", 00:13:06.796 "raid_level": "raid5f", 00:13:06.796 "superblock": false, 00:13:06.796 "num_base_bdevs": 3, 00:13:06.796 "num_base_bdevs_discovered": 2, 00:13:06.796 "num_base_bdevs_operational": 2, 00:13:06.796 "base_bdevs_list": [ 00:13:06.796 { 00:13:06.796 "name": null, 00:13:06.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.796 "is_configured": false, 00:13:06.796 "data_offset": 0, 00:13:06.796 "data_size": 65536 00:13:06.796 }, 00:13:06.796 { 00:13:06.796 "name": "BaseBdev2", 00:13:06.796 
"uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:06.796 "is_configured": true, 00:13:06.796 "data_offset": 0, 00:13:06.796 "data_size": 65536 00:13:06.796 }, 00:13:06.796 { 00:13:06.796 "name": "BaseBdev3", 00:13:06.796 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:06.796 "is_configured": true, 00:13:06.796 "data_offset": 0, 00:13:06.796 "data_size": 65536 00:13:06.796 } 00:13:06.796 ] 00:13:06.796 }' 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.796 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.367 "name": "raid_bdev1", 00:13:07.367 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:07.367 "strip_size_kb": 64, 00:13:07.367 "state": "online", 00:13:07.367 "raid_level": 
"raid5f", 00:13:07.367 "superblock": false, 00:13:07.367 "num_base_bdevs": 3, 00:13:07.367 "num_base_bdevs_discovered": 2, 00:13:07.367 "num_base_bdevs_operational": 2, 00:13:07.367 "base_bdevs_list": [ 00:13:07.367 { 00:13:07.367 "name": null, 00:13:07.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.367 "is_configured": false, 00:13:07.367 "data_offset": 0, 00:13:07.367 "data_size": 65536 00:13:07.367 }, 00:13:07.367 { 00:13:07.367 "name": "BaseBdev2", 00:13:07.367 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:07.367 "is_configured": true, 00:13:07.367 "data_offset": 0, 00:13:07.367 "data_size": 65536 00:13:07.367 }, 00:13:07.367 { 00:13:07.367 "name": "BaseBdev3", 00:13:07.367 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:07.367 "is_configured": true, 00:13:07.367 "data_offset": 0, 00:13:07.367 "data_size": 65536 00:13:07.367 } 00:13:07.367 ] 00:13:07.367 }' 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.367 11:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 [2024-12-06 11:56:05.022250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.367 [2024-12-06 11:56:05.025512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:07.367 [2024-12-06 11:56:05.027569] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 11:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.304 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.564 "name": "raid_bdev1", 00:13:08.564 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:08.564 "strip_size_kb": 64, 00:13:08.564 "state": "online", 00:13:08.564 "raid_level": "raid5f", 00:13:08.564 "superblock": false, 00:13:08.564 "num_base_bdevs": 3, 00:13:08.564 "num_base_bdevs_discovered": 3, 00:13:08.564 "num_base_bdevs_operational": 3, 00:13:08.564 "process": { 00:13:08.564 "type": "rebuild", 00:13:08.564 "target": "spare", 00:13:08.564 "progress": { 00:13:08.564 "blocks": 20480, 
00:13:08.564 "percent": 15 00:13:08.564 } 00:13:08.564 }, 00:13:08.564 "base_bdevs_list": [ 00:13:08.564 { 00:13:08.564 "name": "spare", 00:13:08.564 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 "data_size": 65536 00:13:08.564 }, 00:13:08.564 { 00:13:08.564 "name": "BaseBdev2", 00:13:08.564 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 "data_size": 65536 00:13:08.564 }, 00:13:08.564 { 00:13:08.564 "name": "BaseBdev3", 00:13:08.564 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 "data_size": 65536 00:13:08.564 } 00:13:08.564 ] 00:13:08.564 }' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.564 "name": "raid_bdev1", 00:13:08.564 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:08.564 "strip_size_kb": 64, 00:13:08.564 "state": "online", 00:13:08.564 "raid_level": "raid5f", 00:13:08.564 "superblock": false, 00:13:08.564 "num_base_bdevs": 3, 00:13:08.564 "num_base_bdevs_discovered": 3, 00:13:08.564 "num_base_bdevs_operational": 3, 00:13:08.564 "process": { 00:13:08.564 "type": "rebuild", 00:13:08.564 "target": "spare", 00:13:08.564 "progress": { 00:13:08.564 "blocks": 22528, 00:13:08.564 "percent": 17 00:13:08.564 } 00:13:08.564 }, 00:13:08.564 "base_bdevs_list": [ 00:13:08.564 { 00:13:08.564 "name": "spare", 00:13:08.564 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 "data_size": 65536 00:13:08.564 }, 00:13:08.564 { 00:13:08.564 "name": "BaseBdev2", 00:13:08.564 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 
"data_size": 65536 00:13:08.564 }, 00:13:08.564 { 00:13:08.564 "name": "BaseBdev3", 00:13:08.564 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:08.564 "is_configured": true, 00:13:08.564 "data_offset": 0, 00:13:08.564 "data_size": 65536 00:13:08.564 } 00:13:08.564 ] 00:13:08.564 }' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.564 11:56:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.946 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.946 "name": "raid_bdev1", 00:13:09.946 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:09.946 "strip_size_kb": 64, 00:13:09.946 "state": "online", 00:13:09.946 "raid_level": "raid5f", 00:13:09.946 "superblock": false, 00:13:09.946 "num_base_bdevs": 3, 00:13:09.946 "num_base_bdevs_discovered": 3, 00:13:09.946 "num_base_bdevs_operational": 3, 00:13:09.946 "process": { 00:13:09.946 "type": "rebuild", 00:13:09.946 "target": "spare", 00:13:09.947 "progress": { 00:13:09.947 "blocks": 45056, 00:13:09.947 "percent": 34 00:13:09.947 } 00:13:09.947 }, 00:13:09.947 "base_bdevs_list": [ 00:13:09.947 { 00:13:09.947 "name": "spare", 00:13:09.947 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:09.947 "is_configured": true, 00:13:09.947 "data_offset": 0, 00:13:09.947 "data_size": 65536 00:13:09.947 }, 00:13:09.947 { 00:13:09.947 "name": "BaseBdev2", 00:13:09.947 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:09.947 "is_configured": true, 00:13:09.947 "data_offset": 0, 00:13:09.947 "data_size": 65536 00:13:09.947 }, 00:13:09.947 { 00:13:09.947 "name": "BaseBdev3", 00:13:09.947 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:09.947 "is_configured": true, 00:13:09.947 "data_offset": 0, 00:13:09.947 "data_size": 65536 00:13:09.947 } 00:13:09.947 ] 00:13:09.947 }' 00:13:09.947 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.947 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.947 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.947 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.947 11:56:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.888 "name": "raid_bdev1", 00:13:10.888 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:10.888 "strip_size_kb": 64, 00:13:10.888 "state": "online", 00:13:10.888 "raid_level": "raid5f", 00:13:10.888 "superblock": false, 00:13:10.888 "num_base_bdevs": 3, 00:13:10.888 "num_base_bdevs_discovered": 3, 00:13:10.888 "num_base_bdevs_operational": 3, 00:13:10.888 "process": { 00:13:10.888 "type": "rebuild", 00:13:10.888 "target": "spare", 00:13:10.888 "progress": { 00:13:10.888 "blocks": 69632, 00:13:10.888 "percent": 53 00:13:10.888 } 00:13:10.888 }, 00:13:10.888 "base_bdevs_list": [ 00:13:10.888 { 00:13:10.888 "name": "spare", 00:13:10.888 "uuid": 
"beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:10.888 "is_configured": true, 00:13:10.888 "data_offset": 0, 00:13:10.888 "data_size": 65536 00:13:10.888 }, 00:13:10.888 { 00:13:10.888 "name": "BaseBdev2", 00:13:10.888 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:10.888 "is_configured": true, 00:13:10.888 "data_offset": 0, 00:13:10.888 "data_size": 65536 00:13:10.888 }, 00:13:10.888 { 00:13:10.888 "name": "BaseBdev3", 00:13:10.888 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:10.888 "is_configured": true, 00:13:10.888 "data_offset": 0, 00:13:10.888 "data_size": 65536 00:13:10.888 } 00:13:10.888 ] 00:13:10.888 }' 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.888 11:56:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.278 11:56:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.278 "name": "raid_bdev1", 00:13:12.278 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:12.278 "strip_size_kb": 64, 00:13:12.278 "state": "online", 00:13:12.278 "raid_level": "raid5f", 00:13:12.278 "superblock": false, 00:13:12.278 "num_base_bdevs": 3, 00:13:12.278 "num_base_bdevs_discovered": 3, 00:13:12.278 "num_base_bdevs_operational": 3, 00:13:12.278 "process": { 00:13:12.278 "type": "rebuild", 00:13:12.278 "target": "spare", 00:13:12.278 "progress": { 00:13:12.278 "blocks": 92160, 00:13:12.278 "percent": 70 00:13:12.278 } 00:13:12.278 }, 00:13:12.278 "base_bdevs_list": [ 00:13:12.278 { 00:13:12.278 "name": "spare", 00:13:12.278 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:12.278 "is_configured": true, 00:13:12.278 "data_offset": 0, 00:13:12.278 "data_size": 65536 00:13:12.278 }, 00:13:12.278 { 00:13:12.278 "name": "BaseBdev2", 00:13:12.278 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:12.278 "is_configured": true, 00:13:12.278 "data_offset": 0, 00:13:12.278 "data_size": 65536 00:13:12.278 }, 00:13:12.278 { 00:13:12.278 "name": "BaseBdev3", 00:13:12.278 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:12.278 "is_configured": true, 00:13:12.278 "data_offset": 0, 00:13:12.278 "data_size": 65536 00:13:12.278 } 00:13:12.278 ] 00:13:12.278 }' 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.278 11:56:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.217 "name": "raid_bdev1", 00:13:13.217 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:13.217 "strip_size_kb": 64, 00:13:13.217 "state": "online", 00:13:13.217 "raid_level": "raid5f", 00:13:13.217 "superblock": false, 00:13:13.217 "num_base_bdevs": 3, 00:13:13.217 "num_base_bdevs_discovered": 3, 00:13:13.217 
"num_base_bdevs_operational": 3, 00:13:13.217 "process": { 00:13:13.217 "type": "rebuild", 00:13:13.217 "target": "spare", 00:13:13.217 "progress": { 00:13:13.217 "blocks": 116736, 00:13:13.217 "percent": 89 00:13:13.217 } 00:13:13.217 }, 00:13:13.217 "base_bdevs_list": [ 00:13:13.217 { 00:13:13.217 "name": "spare", 00:13:13.217 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:13.217 "is_configured": true, 00:13:13.217 "data_offset": 0, 00:13:13.217 "data_size": 65536 00:13:13.217 }, 00:13:13.217 { 00:13:13.217 "name": "BaseBdev2", 00:13:13.217 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:13.217 "is_configured": true, 00:13:13.217 "data_offset": 0, 00:13:13.217 "data_size": 65536 00:13:13.217 }, 00:13:13.217 { 00:13:13.217 "name": "BaseBdev3", 00:13:13.217 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:13.217 "is_configured": true, 00:13:13.217 "data_offset": 0, 00:13:13.217 "data_size": 65536 00:13:13.217 } 00:13:13.217 ] 00:13:13.217 }' 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.217 11:56:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.785 [2024-12-06 11:56:11.459919] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:13.785 [2024-12-06 11:56:11.460052] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:13.785 [2024-12-06 11:56:11.460130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.354 "name": "raid_bdev1", 00:13:14.354 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:14.354 "strip_size_kb": 64, 00:13:14.354 "state": "online", 00:13:14.354 "raid_level": "raid5f", 00:13:14.354 "superblock": false, 00:13:14.354 "num_base_bdevs": 3, 00:13:14.354 "num_base_bdevs_discovered": 3, 00:13:14.354 "num_base_bdevs_operational": 3, 00:13:14.354 "base_bdevs_list": [ 00:13:14.354 { 00:13:14.354 "name": "spare", 00:13:14.354 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:14.354 "is_configured": true, 00:13:14.354 "data_offset": 0, 00:13:14.354 "data_size": 65536 00:13:14.354 }, 00:13:14.354 { 00:13:14.354 "name": "BaseBdev2", 00:13:14.354 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:14.354 "is_configured": true, 00:13:14.354 
"data_offset": 0, 00:13:14.354 "data_size": 65536 00:13:14.354 }, 00:13:14.354 { 00:13:14.354 "name": "BaseBdev3", 00:13:14.354 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:14.354 "is_configured": true, 00:13:14.354 "data_offset": 0, 00:13:14.354 "data_size": 65536 00:13:14.354 } 00:13:14.354 ] 00:13:14.354 }' 00:13:14.354 11:56:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.354 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.354 11:56:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.354 "name": "raid_bdev1", 00:13:14.354 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:14.354 "strip_size_kb": 64, 00:13:14.354 "state": "online", 00:13:14.354 "raid_level": "raid5f", 00:13:14.354 "superblock": false, 00:13:14.354 "num_base_bdevs": 3, 00:13:14.354 "num_base_bdevs_discovered": 3, 00:13:14.354 "num_base_bdevs_operational": 3, 00:13:14.354 "base_bdevs_list": [ 00:13:14.354 { 00:13:14.354 "name": "spare", 00:13:14.354 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:14.354 "is_configured": true, 00:13:14.354 "data_offset": 0, 00:13:14.355 "data_size": 65536 00:13:14.355 }, 00:13:14.355 { 00:13:14.355 "name": "BaseBdev2", 00:13:14.355 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:14.355 "is_configured": true, 00:13:14.355 "data_offset": 0, 00:13:14.355 "data_size": 65536 00:13:14.355 }, 00:13:14.355 { 00:13:14.355 "name": "BaseBdev3", 00:13:14.355 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:14.355 "is_configured": true, 00:13:14.355 "data_offset": 0, 00:13:14.355 "data_size": 65536 00:13:14.355 } 00:13:14.355 ] 00:13:14.355 }' 00:13:14.355 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.614 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.615 11:56:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.615 "name": "raid_bdev1", 00:13:14.615 "uuid": "2c01d6f6-c8be-4a45-b9a3-0aa8069f7a09", 00:13:14.615 "strip_size_kb": 64, 00:13:14.615 "state": "online", 00:13:14.615 "raid_level": "raid5f", 00:13:14.615 "superblock": false, 00:13:14.615 "num_base_bdevs": 3, 00:13:14.615 "num_base_bdevs_discovered": 3, 00:13:14.615 "num_base_bdevs_operational": 3, 00:13:14.615 "base_bdevs_list": [ 00:13:14.615 { 00:13:14.615 "name": "spare", 00:13:14.615 "uuid": "beab8036-6e8a-58e7-9fde-a6f2ad0051e0", 00:13:14.615 "is_configured": true, 00:13:14.615 "data_offset": 0, 00:13:14.615 "data_size": 65536 00:13:14.615 }, 00:13:14.615 { 00:13:14.615 
"name": "BaseBdev2", 00:13:14.615 "uuid": "72a46907-3058-59aa-8f00-aa68b70fc061", 00:13:14.615 "is_configured": true, 00:13:14.615 "data_offset": 0, 00:13:14.615 "data_size": 65536 00:13:14.615 }, 00:13:14.615 { 00:13:14.615 "name": "BaseBdev3", 00:13:14.615 "uuid": "6f2eebef-8183-5999-a321-904b0aa1bec9", 00:13:14.615 "is_configured": true, 00:13:14.615 "data_offset": 0, 00:13:14.615 "data_size": 65536 00:13:14.615 } 00:13:14.615 ] 00:13:14.615 }' 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.615 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.874 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.874 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.874 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.874 [2024-12-06 11:56:12.623177] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.874 [2024-12-06 11:56:12.623289] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.874 [2024-12-06 11:56:12.623391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.874 [2024-12-06 11:56:12.623498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.874 [2024-12-06 11:56:12.623559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:14.874 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.133 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:15.133 /dev/nbd0 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.393 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.394 1+0 records in 00:13:15.394 1+0 records out 00:13:15.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379187 s, 10.8 MB/s 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.394 11:56:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:15.394 /dev/nbd1 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:15.653 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.654 1+0 records in 00:13:15.654 1+0 records out 00:13:15.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592122 s, 6.9 MB/s 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.654 11:56:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.654 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:15.912 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.913 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91684 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 91684 ']' 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 91684 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91684 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.172 killing process with pid 91684 00:13:16.172 Received shutdown signal, test time was about 60.000000 seconds 00:13:16.172 00:13:16.172 Latency(us) 00:13:16.172 [2024-12-06T11:56:13.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.172 [2024-12-06T11:56:13.930Z] =================================================================================================================== 00:13:16.172 [2024-12-06T11:56:13.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91684' 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 91684 00:13:16.172 [2024-12-06 11:56:13.747890] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.172 11:56:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 91684 00:13:16.172 [2024-12-06 11:56:13.789407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.432 11:56:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:16.432 00:13:16.432 real 0m13.631s 00:13:16.433 user 0m16.981s 00:13:16.433 sys 0m2.055s 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.433 ************************************ 00:13:16.433 END TEST raid5f_rebuild_test 00:13:16.433 ************************************ 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.433 11:56:14 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:16.433 11:56:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:16.433 11:56:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.433 11:56:14 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:13:16.433 ************************************ 00:13:16.433 START TEST raid5f_rebuild_test_sb 00:13:16.433 ************************************ 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92107 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92107 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92107 ']' 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.433 11:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.693 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.693 Zero copy mechanism will not be used. 00:13:16.694 [2024-12-06 11:56:14.197906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:16.694 [2024-12-06 11:56:14.198031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92107 ] 00:13:16.694 [2024-12-06 11:56:14.344207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.694 [2024-12-06 11:56:14.388973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.694 [2024-12-06 11:56:14.431708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.694 [2024-12-06 11:56:14.431829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.264 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 BaseBdev1_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 [2024-12-06 11:56:15.030296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:17.539 [2024-12-06 11:56:15.030360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.539 [2024-12-06 11:56:15.030387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:17.539 [2024-12-06 11:56:15.030401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.539 [2024-12-06 11:56:15.032481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.539 [2024-12-06 11:56:15.032519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.539 BaseBdev1 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.539 11:56:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 BaseBdev2_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 [2024-12-06 11:56:15.068023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:17.539 [2024-12-06 11:56:15.068152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.539 [2024-12-06 11:56:15.068178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.539 [2024-12-06 11:56:15.068187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.539 [2024-12-06 11:56:15.070272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.539 [2024-12-06 11:56:15.070301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.539 BaseBdev2 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:17.539 BaseBdev3_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 [2024-12-06 11:56:15.096618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:17.539 [2024-12-06 11:56:15.096676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.539 [2024-12-06 11:56:15.096703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.539 [2024-12-06 11:56:15.096712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.539 [2024-12-06 11:56:15.098686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.539 [2024-12-06 11:56:15.098779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.539 BaseBdev3 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 spare_malloc 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 spare_delay 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 [2024-12-06 11:56:15.137340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:17.539 [2024-12-06 11:56:15.137391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.539 [2024-12-06 11:56:15.137417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:17.539 [2024-12-06 11:56:15.137425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.539 [2024-12-06 11:56:15.139417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.539 [2024-12-06 11:56:15.139511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.539 spare 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.539 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 [2024-12-06 11:56:15.149389] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.540 [2024-12-06 11:56:15.151179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.540 [2024-12-06 11:56:15.151236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.540 [2024-12-06 11:56:15.151395] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:17.540 [2024-12-06 11:56:15.151409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.540 [2024-12-06 11:56:15.151664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:17.540 [2024-12-06 11:56:15.152063] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:17.540 [2024-12-06 11:56:15.152075] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:17.540 [2024-12-06 11:56:15.152197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.540 "name": "raid_bdev1", 00:13:17.540 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:17.540 "strip_size_kb": 64, 00:13:17.540 "state": "online", 00:13:17.540 "raid_level": "raid5f", 00:13:17.540 "superblock": true, 00:13:17.540 "num_base_bdevs": 3, 00:13:17.540 "num_base_bdevs_discovered": 3, 00:13:17.540 "num_base_bdevs_operational": 3, 00:13:17.540 "base_bdevs_list": [ 00:13:17.540 { 00:13:17.540 "name": "BaseBdev1", 00:13:17.540 "uuid": "1ff970ba-0cb5-5063-a0b5-0aa398aec8a6", 00:13:17.540 "is_configured": true, 00:13:17.540 "data_offset": 2048, 00:13:17.540 "data_size": 63488 00:13:17.540 }, 00:13:17.540 { 00:13:17.540 "name": "BaseBdev2", 00:13:17.540 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:17.540 "is_configured": true, 00:13:17.540 "data_offset": 2048, 00:13:17.540 "data_size": 63488 00:13:17.540 }, 00:13:17.540 { 00:13:17.540 "name": "BaseBdev3", 00:13:17.540 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:17.540 "is_configured": true, 
00:13:17.540 "data_offset": 2048, 00:13:17.540 "data_size": 63488 00:13:17.540 } 00:13:17.540 ] 00:13:17.540 }' 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.540 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.108 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:18.108 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:18.108 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.108 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.109 [2024-12-06 11:56:15.569102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:18.109 11:56:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:18.109 [2024-12-06 11:56:15.812541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:18.109 /dev/nbd0 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:13:18.109 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.368 1+0 records in 00:13:18.368 1+0 records out 00:13:18.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473027 s, 8.7 MB/s 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:18.368 11:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:18.628 496+0 records in 00:13:18.628 496+0 records out 00:13:18.628 65011712 bytes (65 MB, 62 MiB) copied, 0.273239 s, 238 MB/s 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.628 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.628 [2024-12-06 11:56:16.381223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.887 [2024-12-06 11:56:16.396286] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.887 11:56:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.887 "name": "raid_bdev1", 00:13:18.887 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:18.887 "strip_size_kb": 64, 00:13:18.887 "state": "online", 00:13:18.887 "raid_level": "raid5f", 00:13:18.887 "superblock": true, 00:13:18.887 "num_base_bdevs": 3, 00:13:18.887 "num_base_bdevs_discovered": 2, 00:13:18.887 "num_base_bdevs_operational": 2, 00:13:18.887 "base_bdevs_list": [ 00:13:18.887 { 00:13:18.887 "name": null, 00:13:18.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.887 "is_configured": false, 00:13:18.887 "data_offset": 0, 00:13:18.887 "data_size": 63488 00:13:18.887 }, 00:13:18.887 { 00:13:18.887 "name": "BaseBdev2", 00:13:18.887 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:18.887 "is_configured": true, 00:13:18.887 "data_offset": 2048, 00:13:18.887 "data_size": 63488 00:13:18.887 }, 00:13:18.887 { 00:13:18.887 "name": "BaseBdev3", 00:13:18.887 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:18.887 "is_configured": true, 00:13:18.887 "data_offset": 2048, 00:13:18.887 "data_size": 63488 00:13:18.887 } 00:13:18.887 ] 00:13:18.887 }' 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.887 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.147 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.147 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.147 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.147 [2024-12-06 11:56:16.855468] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.147 [2024-12-06 11:56:16.859345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:19.147 [2024-12-06 11:56:16.861465] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.147 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.147 11:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.529 "name": "raid_bdev1", 00:13:20.529 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:20.529 "strip_size_kb": 64, 00:13:20.529 "state": "online", 00:13:20.529 "raid_level": "raid5f", 00:13:20.529 
"superblock": true, 00:13:20.529 "num_base_bdevs": 3, 00:13:20.529 "num_base_bdevs_discovered": 3, 00:13:20.529 "num_base_bdevs_operational": 3, 00:13:20.529 "process": { 00:13:20.529 "type": "rebuild", 00:13:20.529 "target": "spare", 00:13:20.529 "progress": { 00:13:20.529 "blocks": 20480, 00:13:20.529 "percent": 16 00:13:20.529 } 00:13:20.529 }, 00:13:20.529 "base_bdevs_list": [ 00:13:20.529 { 00:13:20.529 "name": "spare", 00:13:20.529 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:20.529 "is_configured": true, 00:13:20.529 "data_offset": 2048, 00:13:20.529 "data_size": 63488 00:13:20.529 }, 00:13:20.529 { 00:13:20.529 "name": "BaseBdev2", 00:13:20.529 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:20.529 "is_configured": true, 00:13:20.529 "data_offset": 2048, 00:13:20.529 "data_size": 63488 00:13:20.529 }, 00:13:20.529 { 00:13:20.529 "name": "BaseBdev3", 00:13:20.529 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:20.529 "is_configured": true, 00:13:20.529 "data_offset": 2048, 00:13:20.529 "data_size": 63488 00:13:20.529 } 00:13:20.529 ] 00:13:20.529 }' 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.529 11:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.529 [2024-12-06 11:56:18.024205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:20.529 [2024-12-06 11:56:18.068155] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.529 [2024-12-06 11:56:18.068211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.529 [2024-12-06 11:56:18.068227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.529 [2024-12-06 11:56:18.068255] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.529 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.529 "name": "raid_bdev1", 00:13:20.529 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:20.529 "strip_size_kb": 64, 00:13:20.529 "state": "online", 00:13:20.529 "raid_level": "raid5f", 00:13:20.529 "superblock": true, 00:13:20.529 "num_base_bdevs": 3, 00:13:20.529 "num_base_bdevs_discovered": 2, 00:13:20.529 "num_base_bdevs_operational": 2, 00:13:20.529 "base_bdevs_list": [ 00:13:20.529 { 00:13:20.529 "name": null, 00:13:20.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.529 "is_configured": false, 00:13:20.529 "data_offset": 0, 00:13:20.529 "data_size": 63488 00:13:20.529 }, 00:13:20.529 { 00:13:20.530 "name": "BaseBdev2", 00:13:20.530 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:20.530 "is_configured": true, 00:13:20.530 "data_offset": 2048, 00:13:20.530 "data_size": 63488 00:13:20.530 }, 00:13:20.530 { 00:13:20.530 "name": "BaseBdev3", 00:13:20.530 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:20.530 "is_configured": true, 00:13:20.530 "data_offset": 2048, 00:13:20.530 "data_size": 63488 00:13:20.530 } 00:13:20.530 ] 00:13:20.530 }' 00:13:20.530 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.530 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.100 11:56:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.100 "name": "raid_bdev1", 00:13:21.100 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:21.100 "strip_size_kb": 64, 00:13:21.100 "state": "online", 00:13:21.100 "raid_level": "raid5f", 00:13:21.100 "superblock": true, 00:13:21.100 "num_base_bdevs": 3, 00:13:21.100 "num_base_bdevs_discovered": 2, 00:13:21.100 "num_base_bdevs_operational": 2, 00:13:21.100 "base_bdevs_list": [ 00:13:21.100 { 00:13:21.100 "name": null, 00:13:21.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.100 "is_configured": false, 00:13:21.100 "data_offset": 0, 00:13:21.100 "data_size": 63488 00:13:21.100 }, 00:13:21.100 { 00:13:21.100 "name": "BaseBdev2", 00:13:21.100 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:21.100 "is_configured": true, 00:13:21.100 "data_offset": 2048, 00:13:21.100 "data_size": 63488 00:13:21.100 }, 00:13:21.100 { 00:13:21.100 "name": "BaseBdev3", 00:13:21.100 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:21.100 "is_configured": true, 00:13:21.100 "data_offset": 2048, 00:13:21.100 
"data_size": 63488 00:13:21.100 } 00:13:21.100 ] 00:13:21.100 }' 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.100 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.101 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.101 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.101 [2024-12-06 11:56:18.696590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.101 [2024-12-06 11:56:18.700230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:13:21.101 [2024-12-06 11:56:18.702356] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.101 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.101 11:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.041 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.041 "name": "raid_bdev1", 00:13:22.041 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:22.041 "strip_size_kb": 64, 00:13:22.041 "state": "online", 00:13:22.041 "raid_level": "raid5f", 00:13:22.041 "superblock": true, 00:13:22.041 "num_base_bdevs": 3, 00:13:22.041 "num_base_bdevs_discovered": 3, 00:13:22.041 "num_base_bdevs_operational": 3, 00:13:22.041 "process": { 00:13:22.041 "type": "rebuild", 00:13:22.042 "target": "spare", 00:13:22.042 "progress": { 00:13:22.042 "blocks": 20480, 00:13:22.042 "percent": 16 00:13:22.042 } 00:13:22.042 }, 00:13:22.042 "base_bdevs_list": [ 00:13:22.042 { 00:13:22.042 "name": "spare", 00:13:22.042 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:22.042 "is_configured": true, 00:13:22.042 "data_offset": 2048, 00:13:22.042 "data_size": 63488 00:13:22.042 }, 00:13:22.042 { 00:13:22.042 "name": "BaseBdev2", 00:13:22.042 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:22.042 "is_configured": true, 00:13:22.042 "data_offset": 2048, 00:13:22.042 "data_size": 63488 00:13:22.042 }, 00:13:22.042 { 00:13:22.042 "name": "BaseBdev3", 00:13:22.042 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:22.042 "is_configured": true, 00:13:22.042 "data_offset": 2048, 00:13:22.042 "data_size": 63488 00:13:22.042 } 00:13:22.042 ] 00:13:22.042 }' 
00:13:22.042 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:22.303 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.303 "name": "raid_bdev1", 00:13:22.303 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:22.303 "strip_size_kb": 64, 00:13:22.303 "state": "online", 00:13:22.303 "raid_level": "raid5f", 00:13:22.303 "superblock": true, 00:13:22.303 "num_base_bdevs": 3, 00:13:22.303 "num_base_bdevs_discovered": 3, 00:13:22.303 "num_base_bdevs_operational": 3, 00:13:22.303 "process": { 00:13:22.303 "type": "rebuild", 00:13:22.303 "target": "spare", 00:13:22.303 "progress": { 00:13:22.303 "blocks": 22528, 00:13:22.303 "percent": 17 00:13:22.303 } 00:13:22.303 }, 00:13:22.303 "base_bdevs_list": [ 00:13:22.303 { 00:13:22.303 "name": "spare", 00:13:22.303 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:22.303 "is_configured": true, 00:13:22.303 "data_offset": 2048, 00:13:22.303 "data_size": 63488 00:13:22.303 }, 00:13:22.303 { 00:13:22.303 "name": "BaseBdev2", 00:13:22.303 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:22.303 "is_configured": true, 00:13:22.303 "data_offset": 2048, 00:13:22.303 "data_size": 63488 00:13:22.303 }, 00:13:22.303 { 00:13:22.303 "name": "BaseBdev3", 00:13:22.303 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:22.303 "is_configured": true, 00:13:22.303 "data_offset": 2048, 00:13:22.303 "data_size": 63488 00:13:22.303 } 00:13:22.303 ] 00:13:22.303 }' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.303 11:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.686 "name": "raid_bdev1", 00:13:23.686 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:23.686 "strip_size_kb": 64, 00:13:23.686 "state": "online", 00:13:23.686 "raid_level": "raid5f", 00:13:23.686 "superblock": true, 00:13:23.686 "num_base_bdevs": 3, 00:13:23.686 "num_base_bdevs_discovered": 3, 00:13:23.686 
"num_base_bdevs_operational": 3, 00:13:23.686 "process": { 00:13:23.686 "type": "rebuild", 00:13:23.686 "target": "spare", 00:13:23.686 "progress": { 00:13:23.686 "blocks": 47104, 00:13:23.686 "percent": 37 00:13:23.686 } 00:13:23.686 }, 00:13:23.686 "base_bdevs_list": [ 00:13:23.686 { 00:13:23.686 "name": "spare", 00:13:23.686 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:23.686 "is_configured": true, 00:13:23.686 "data_offset": 2048, 00:13:23.686 "data_size": 63488 00:13:23.686 }, 00:13:23.686 { 00:13:23.686 "name": "BaseBdev2", 00:13:23.686 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:23.686 "is_configured": true, 00:13:23.686 "data_offset": 2048, 00:13:23.686 "data_size": 63488 00:13:23.686 }, 00:13:23.686 { 00:13:23.686 "name": "BaseBdev3", 00:13:23.686 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:23.686 "is_configured": true, 00:13:23.686 "data_offset": 2048, 00:13:23.686 "data_size": 63488 00:13:23.686 } 00:13:23.686 ] 00:13:23.686 }' 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.686 11:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.630 "name": "raid_bdev1", 00:13:24.630 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:24.630 "strip_size_kb": 64, 00:13:24.630 "state": "online", 00:13:24.630 "raid_level": "raid5f", 00:13:24.630 "superblock": true, 00:13:24.630 "num_base_bdevs": 3, 00:13:24.630 "num_base_bdevs_discovered": 3, 00:13:24.630 "num_base_bdevs_operational": 3, 00:13:24.630 "process": { 00:13:24.630 "type": "rebuild", 00:13:24.630 "target": "spare", 00:13:24.630 "progress": { 00:13:24.630 "blocks": 69632, 00:13:24.630 "percent": 54 00:13:24.630 } 00:13:24.630 }, 00:13:24.630 "base_bdevs_list": [ 00:13:24.630 { 00:13:24.630 "name": "spare", 00:13:24.630 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 2048, 00:13:24.630 "data_size": 63488 00:13:24.630 }, 00:13:24.630 { 00:13:24.630 "name": "BaseBdev2", 00:13:24.630 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 2048, 00:13:24.630 "data_size": 63488 00:13:24.630 }, 00:13:24.630 { 00:13:24.630 "name": "BaseBdev3", 
00:13:24.630 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 2048, 00:13:24.630 "data_size": 63488 00:13:24.630 } 00:13:24.630 ] 00:13:24.630 }' 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.630 11:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:25.572 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.572 "name": "raid_bdev1", 00:13:25.572 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:25.572 "strip_size_kb": 64, 00:13:25.572 "state": "online", 00:13:25.572 "raid_level": "raid5f", 00:13:25.572 "superblock": true, 00:13:25.572 "num_base_bdevs": 3, 00:13:25.572 "num_base_bdevs_discovered": 3, 00:13:25.572 "num_base_bdevs_operational": 3, 00:13:25.572 "process": { 00:13:25.572 "type": "rebuild", 00:13:25.572 "target": "spare", 00:13:25.572 "progress": { 00:13:25.573 "blocks": 92160, 00:13:25.573 "percent": 72 00:13:25.573 } 00:13:25.573 }, 00:13:25.573 "base_bdevs_list": [ 00:13:25.573 { 00:13:25.573 "name": "spare", 00:13:25.573 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:25.573 "is_configured": true, 00:13:25.573 "data_offset": 2048, 00:13:25.573 "data_size": 63488 00:13:25.573 }, 00:13:25.573 { 00:13:25.573 "name": "BaseBdev2", 00:13:25.573 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:25.573 "is_configured": true, 00:13:25.573 "data_offset": 2048, 00:13:25.573 "data_size": 63488 00:13:25.573 }, 00:13:25.573 { 00:13:25.573 "name": "BaseBdev3", 00:13:25.573 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:25.573 "is_configured": true, 00:13:25.573 "data_offset": 2048, 00:13:25.573 "data_size": 63488 00:13:25.573 } 00:13:25.573 ] 00:13:25.573 }' 00:13:25.573 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.832 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.832 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.832 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.832 11:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.773 11:56:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.773 "name": "raid_bdev1", 00:13:26.773 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:26.773 "strip_size_kb": 64, 00:13:26.773 "state": "online", 00:13:26.773 "raid_level": "raid5f", 00:13:26.773 "superblock": true, 00:13:26.773 "num_base_bdevs": 3, 00:13:26.773 "num_base_bdevs_discovered": 3, 00:13:26.773 "num_base_bdevs_operational": 3, 00:13:26.773 "process": { 00:13:26.773 "type": "rebuild", 00:13:26.773 "target": "spare", 00:13:26.773 "progress": { 00:13:26.773 "blocks": 114688, 00:13:26.773 "percent": 90 00:13:26.773 } 00:13:26.773 }, 00:13:26.773 "base_bdevs_list": [ 00:13:26.773 { 00:13:26.773 "name": "spare", 00:13:26.773 "uuid": 
"7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:26.773 "is_configured": true, 00:13:26.773 "data_offset": 2048, 00:13:26.773 "data_size": 63488 00:13:26.773 }, 00:13:26.773 { 00:13:26.773 "name": "BaseBdev2", 00:13:26.773 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:26.773 "is_configured": true, 00:13:26.773 "data_offset": 2048, 00:13:26.773 "data_size": 63488 00:13:26.773 }, 00:13:26.773 { 00:13:26.773 "name": "BaseBdev3", 00:13:26.773 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:26.773 "is_configured": true, 00:13:26.773 "data_offset": 2048, 00:13:26.773 "data_size": 63488 00:13:26.773 } 00:13:26.773 ] 00:13:26.773 }' 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.773 11:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.343 [2024-12-06 11:56:24.933840] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.343 [2024-12-06 11:56:24.933952] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.343 [2024-12-06 11:56:24.934086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.912 "name": "raid_bdev1", 00:13:27.912 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:27.912 "strip_size_kb": 64, 00:13:27.912 "state": "online", 00:13:27.912 "raid_level": "raid5f", 00:13:27.912 "superblock": true, 00:13:27.912 "num_base_bdevs": 3, 00:13:27.912 "num_base_bdevs_discovered": 3, 00:13:27.912 "num_base_bdevs_operational": 3, 00:13:27.912 "base_bdevs_list": [ 00:13:27.912 { 00:13:27.912 "name": "spare", 00:13:27.912 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:27.912 "is_configured": true, 00:13:27.912 "data_offset": 2048, 00:13:27.912 "data_size": 63488 00:13:27.912 }, 00:13:27.912 { 00:13:27.912 "name": "BaseBdev2", 00:13:27.912 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:27.912 "is_configured": true, 00:13:27.912 "data_offset": 2048, 00:13:27.912 "data_size": 63488 00:13:27.912 }, 00:13:27.912 { 00:13:27.912 "name": "BaseBdev3", 00:13:27.912 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:27.912 "is_configured": true, 00:13:27.912 "data_offset": 2048, 00:13:27.912 "data_size": 63488 00:13:27.912 } 
00:13:27.912 ] 00:13:27.912 }' 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.912 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.172 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.172 "name": "raid_bdev1", 00:13:28.172 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:28.172 "strip_size_kb": 64, 00:13:28.172 "state": "online", 00:13:28.172 "raid_level": 
"raid5f", 00:13:28.172 "superblock": true, 00:13:28.172 "num_base_bdevs": 3, 00:13:28.172 "num_base_bdevs_discovered": 3, 00:13:28.172 "num_base_bdevs_operational": 3, 00:13:28.172 "base_bdevs_list": [ 00:13:28.172 { 00:13:28.172 "name": "spare", 00:13:28.172 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:28.172 "is_configured": true, 00:13:28.172 "data_offset": 2048, 00:13:28.172 "data_size": 63488 00:13:28.172 }, 00:13:28.172 { 00:13:28.172 "name": "BaseBdev2", 00:13:28.172 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:28.172 "is_configured": true, 00:13:28.172 "data_offset": 2048, 00:13:28.172 "data_size": 63488 00:13:28.172 }, 00:13:28.172 { 00:13:28.172 "name": "BaseBdev3", 00:13:28.173 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:28.173 "is_configured": true, 00:13:28.173 "data_offset": 2048, 00:13:28.173 "data_size": 63488 00:13:28.173 } 00:13:28.173 ] 00:13:28.173 }' 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.173 11:56:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.173 "name": "raid_bdev1", 00:13:28.173 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:28.173 "strip_size_kb": 64, 00:13:28.173 "state": "online", 00:13:28.173 "raid_level": "raid5f", 00:13:28.173 "superblock": true, 00:13:28.173 "num_base_bdevs": 3, 00:13:28.173 "num_base_bdevs_discovered": 3, 00:13:28.173 "num_base_bdevs_operational": 3, 00:13:28.173 "base_bdevs_list": [ 00:13:28.173 { 00:13:28.173 "name": "spare", 00:13:28.173 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:28.173 "is_configured": true, 00:13:28.173 "data_offset": 2048, 00:13:28.173 "data_size": 63488 00:13:28.173 }, 00:13:28.173 { 00:13:28.173 "name": "BaseBdev2", 00:13:28.173 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:28.173 "is_configured": true, 00:13:28.173 "data_offset": 2048, 00:13:28.173 
"data_size": 63488 00:13:28.173 }, 00:13:28.173 { 00:13:28.173 "name": "BaseBdev3", 00:13:28.173 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:28.173 "is_configured": true, 00:13:28.173 "data_offset": 2048, 00:13:28.173 "data_size": 63488 00:13:28.173 } 00:13:28.173 ] 00:13:28.173 }' 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.173 11:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.742 [2024-12-06 11:56:26.241109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.742 [2024-12-06 11:56:26.241203] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.742 [2024-12-06 11:56:26.241309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.742 [2024-12-06 11:56:26.241419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.742 [2024-12-06 11:56:26.241474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.742 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.742 /dev/nbd0 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.008 1+0 records in 00:13:29.008 1+0 records out 00:13:29.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439873 s, 9.3 MB/s 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.008 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.008 /dev/nbd1 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.270 1+0 records in 00:13:29.270 1+0 records out 00:13:29.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040409 s, 10.1 MB/s 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.270 11:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.527 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.785 [2024-12-06 11:56:27.306799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.785 [2024-12-06 11:56:27.306920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.785 [2024-12-06 11:56:27.306967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:29.785 [2024-12-06 11:56:27.306999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.785 [2024-12-06 11:56:27.309551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.785 [2024-12-06 11:56:27.309624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.785 [2024-12-06 11:56:27.309740] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:29.785 [2024-12-06 11:56:27.309808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.785 [2024-12-06 11:56:27.309979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.785 [2024-12-06 11:56:27.310117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.785 spare 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.785 [2024-12-06 11:56:27.410043] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:29.785 [2024-12-06 11:56:27.410106] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:29.785 [2024-12-06 11:56:27.410433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:13:29.785 [2024-12-06 11:56:27.410917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:29.785 [2024-12-06 11:56:27.410970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:29.785 [2024-12-06 11:56:27.411158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.785 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.786 11:56:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.786 "name": "raid_bdev1", 00:13:29.786 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:29.786 "strip_size_kb": 64, 00:13:29.786 "state": "online", 00:13:29.786 "raid_level": "raid5f", 00:13:29.786 "superblock": true, 00:13:29.786 "num_base_bdevs": 3, 00:13:29.786 "num_base_bdevs_discovered": 3, 00:13:29.786 "num_base_bdevs_operational": 3, 00:13:29.786 "base_bdevs_list": [ 00:13:29.786 { 00:13:29.786 "name": "spare", 00:13:29.786 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:29.786 "is_configured": true, 00:13:29.786 "data_offset": 2048, 00:13:29.786 "data_size": 63488 00:13:29.786 }, 00:13:29.786 { 00:13:29.786 "name": "BaseBdev2", 00:13:29.786 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:29.786 "is_configured": true, 00:13:29.786 "data_offset": 2048, 00:13:29.786 "data_size": 63488 00:13:29.786 }, 00:13:29.786 { 00:13:29.786 "name": "BaseBdev3", 00:13:29.786 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:29.786 "is_configured": true, 00:13:29.786 "data_offset": 2048, 00:13:29.786 "data_size": 63488 00:13:29.786 } 00:13:29.786 ] 00:13:29.786 }' 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.786 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.352 11:56:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.352 "name": "raid_bdev1", 00:13:30.352 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:30.352 "strip_size_kb": 64, 00:13:30.352 "state": "online", 00:13:30.352 "raid_level": "raid5f", 00:13:30.352 "superblock": true, 00:13:30.352 "num_base_bdevs": 3, 00:13:30.352 "num_base_bdevs_discovered": 3, 00:13:30.352 "num_base_bdevs_operational": 3, 00:13:30.352 "base_bdevs_list": [ 00:13:30.352 { 00:13:30.352 "name": "spare", 00:13:30.352 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:30.352 "is_configured": true, 00:13:30.352 "data_offset": 2048, 00:13:30.352 "data_size": 63488 00:13:30.352 }, 00:13:30.352 { 00:13:30.352 "name": "BaseBdev2", 00:13:30.352 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:30.352 "is_configured": true, 00:13:30.352 "data_offset": 2048, 00:13:30.352 "data_size": 63488 00:13:30.352 }, 00:13:30.352 { 00:13:30.352 "name": "BaseBdev3", 00:13:30.352 "uuid": 
"3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:30.352 "is_configured": true, 00:13:30.352 "data_offset": 2048, 00:13:30.352 "data_size": 63488 00:13:30.352 } 00:13:30.352 ] 00:13:30.352 }' 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.352 11:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.352 [2024-12-06 11:56:28.078195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:30.352 
11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.352 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.612 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.612 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.612 "name": "raid_bdev1", 00:13:30.612 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:30.612 "strip_size_kb": 64, 00:13:30.612 "state": "online", 00:13:30.612 "raid_level": "raid5f", 00:13:30.612 "superblock": true, 00:13:30.612 "num_base_bdevs": 3, 00:13:30.612 "num_base_bdevs_discovered": 2, 00:13:30.612 "num_base_bdevs_operational": 2, 
00:13:30.612 "base_bdevs_list": [ 00:13:30.612 { 00:13:30.612 "name": null, 00:13:30.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.612 "is_configured": false, 00:13:30.612 "data_offset": 0, 00:13:30.612 "data_size": 63488 00:13:30.612 }, 00:13:30.612 { 00:13:30.612 "name": "BaseBdev2", 00:13:30.612 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:30.612 "is_configured": true, 00:13:30.612 "data_offset": 2048, 00:13:30.612 "data_size": 63488 00:13:30.612 }, 00:13:30.612 { 00:13:30.612 "name": "BaseBdev3", 00:13:30.612 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:30.612 "is_configured": true, 00:13:30.612 "data_offset": 2048, 00:13:30.612 "data_size": 63488 00:13:30.612 } 00:13:30.612 ] 00:13:30.612 }' 00:13:30.612 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.612 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.871 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.871 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.871 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.871 [2024-12-06 11:56:28.501329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.871 [2024-12-06 11:56:28.501602] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:30.871 [2024-12-06 11:56:28.501663] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:30.871 [2024-12-06 11:56:28.501725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.871 [2024-12-06 11:56:28.505396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:13:30.871 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.871 11:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:30.871 [2024-12-06 11:56:28.507543] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.808 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.068 "name": "raid_bdev1", 00:13:32.068 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:32.068 "strip_size_kb": 64, 00:13:32.068 "state": "online", 00:13:32.068 
"raid_level": "raid5f", 00:13:32.068 "superblock": true, 00:13:32.068 "num_base_bdevs": 3, 00:13:32.068 "num_base_bdevs_discovered": 3, 00:13:32.068 "num_base_bdevs_operational": 3, 00:13:32.068 "process": { 00:13:32.068 "type": "rebuild", 00:13:32.068 "target": "spare", 00:13:32.068 "progress": { 00:13:32.068 "blocks": 20480, 00:13:32.068 "percent": 16 00:13:32.068 } 00:13:32.068 }, 00:13:32.068 "base_bdevs_list": [ 00:13:32.068 { 00:13:32.068 "name": "spare", 00:13:32.068 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:32.068 "is_configured": true, 00:13:32.068 "data_offset": 2048, 00:13:32.068 "data_size": 63488 00:13:32.068 }, 00:13:32.068 { 00:13:32.068 "name": "BaseBdev2", 00:13:32.068 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:32.068 "is_configured": true, 00:13:32.068 "data_offset": 2048, 00:13:32.068 "data_size": 63488 00:13:32.068 }, 00:13:32.068 { 00:13:32.068 "name": "BaseBdev3", 00:13:32.068 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:32.068 "is_configured": true, 00:13:32.068 "data_offset": 2048, 00:13:32.068 "data_size": 63488 00:13:32.068 } 00:13:32.068 ] 00:13:32.068 }' 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.068 [2024-12-06 11:56:29.672555] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.068 [2024-12-06 11:56:29.713824] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.068 [2024-12-06 11:56:29.713918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.068 [2024-12-06 11:56:29.713970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.068 [2024-12-06 11:56:29.713991] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.068 "name": "raid_bdev1", 00:13:32.068 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:32.068 "strip_size_kb": 64, 00:13:32.068 "state": "online", 00:13:32.068 "raid_level": "raid5f", 00:13:32.068 "superblock": true, 00:13:32.068 "num_base_bdevs": 3, 00:13:32.068 "num_base_bdevs_discovered": 2, 00:13:32.068 "num_base_bdevs_operational": 2, 00:13:32.068 "base_bdevs_list": [ 00:13:32.068 { 00:13:32.068 "name": null, 00:13:32.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.068 "is_configured": false, 00:13:32.068 "data_offset": 0, 00:13:32.068 "data_size": 63488 00:13:32.068 }, 00:13:32.068 { 00:13:32.068 "name": "BaseBdev2", 00:13:32.068 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:32.068 "is_configured": true, 00:13:32.068 "data_offset": 2048, 00:13:32.068 "data_size": 63488 00:13:32.068 }, 00:13:32.068 { 00:13:32.068 "name": "BaseBdev3", 00:13:32.068 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:32.068 "is_configured": true, 00:13:32.068 "data_offset": 2048, 00:13:32.068 "data_size": 63488 00:13:32.068 } 00:13:32.068 ] 00:13:32.068 }' 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.068 11:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.636 11:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.636 11:56:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.636 11:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.637 [2024-12-06 11:56:30.166065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.637 [2024-12-06 11:56:30.166170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.637 [2024-12-06 11:56:30.166209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:32.637 [2024-12-06 11:56:30.166245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.637 [2024-12-06 11:56:30.166713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.637 [2024-12-06 11:56:30.166769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.637 [2024-12-06 11:56:30.166876] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:32.637 [2024-12-06 11:56:30.166913] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.637 [2024-12-06 11:56:30.166952] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:32.637 [2024-12-06 11:56:30.167019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.637 [2024-12-06 11:56:30.170013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:13:32.637 spare 00:13:32.637 11:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.637 11:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:32.637 [2024-12-06 11:56:30.172114] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.575 "name": "raid_bdev1", 00:13:33.575 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:33.575 "strip_size_kb": 64, 00:13:33.575 "state": 
"online", 00:13:33.575 "raid_level": "raid5f", 00:13:33.575 "superblock": true, 00:13:33.575 "num_base_bdevs": 3, 00:13:33.575 "num_base_bdevs_discovered": 3, 00:13:33.575 "num_base_bdevs_operational": 3, 00:13:33.575 "process": { 00:13:33.575 "type": "rebuild", 00:13:33.575 "target": "spare", 00:13:33.575 "progress": { 00:13:33.575 "blocks": 20480, 00:13:33.575 "percent": 16 00:13:33.575 } 00:13:33.575 }, 00:13:33.575 "base_bdevs_list": [ 00:13:33.575 { 00:13:33.575 "name": "spare", 00:13:33.575 "uuid": "7769d4b9-a161-5484-8c4c-c29dc028865e", 00:13:33.575 "is_configured": true, 00:13:33.575 "data_offset": 2048, 00:13:33.575 "data_size": 63488 00:13:33.575 }, 00:13:33.575 { 00:13:33.575 "name": "BaseBdev2", 00:13:33.575 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:33.575 "is_configured": true, 00:13:33.575 "data_offset": 2048, 00:13:33.575 "data_size": 63488 00:13:33.575 }, 00:13:33.575 { 00:13:33.575 "name": "BaseBdev3", 00:13:33.575 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:33.575 "is_configured": true, 00:13:33.575 "data_offset": 2048, 00:13:33.575 "data_size": 63488 00:13:33.575 } 00:13:33.575 ] 00:13:33.575 }' 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.575 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.835 [2024-12-06 11:56:31.332553] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.835 [2024-12-06 11:56:31.378367] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.835 [2024-12-06 11:56:31.378467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.835 [2024-12-06 11:56:31.378501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.835 [2024-12-06 11:56:31.378525] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.835 "name": "raid_bdev1", 00:13:33.835 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:33.835 "strip_size_kb": 64, 00:13:33.835 "state": "online", 00:13:33.835 "raid_level": "raid5f", 00:13:33.835 "superblock": true, 00:13:33.835 "num_base_bdevs": 3, 00:13:33.835 "num_base_bdevs_discovered": 2, 00:13:33.835 "num_base_bdevs_operational": 2, 00:13:33.835 "base_bdevs_list": [ 00:13:33.835 { 00:13:33.835 "name": null, 00:13:33.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.835 "is_configured": false, 00:13:33.835 "data_offset": 0, 00:13:33.835 "data_size": 63488 00:13:33.835 }, 00:13:33.835 { 00:13:33.835 "name": "BaseBdev2", 00:13:33.835 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:33.835 "is_configured": true, 00:13:33.835 "data_offset": 2048, 00:13:33.835 "data_size": 63488 00:13:33.835 }, 00:13:33.835 { 00:13:33.835 "name": "BaseBdev3", 00:13:33.835 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:33.835 "is_configured": true, 00:13:33.835 "data_offset": 2048, 00:13:33.835 "data_size": 63488 00:13:33.835 } 00:13:33.835 ] 00:13:33.835 }' 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.835 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.094 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.354 "name": "raid_bdev1", 00:13:34.354 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:34.354 "strip_size_kb": 64, 00:13:34.354 "state": "online", 00:13:34.354 "raid_level": "raid5f", 00:13:34.354 "superblock": true, 00:13:34.354 "num_base_bdevs": 3, 00:13:34.354 "num_base_bdevs_discovered": 2, 00:13:34.354 "num_base_bdevs_operational": 2, 00:13:34.354 "base_bdevs_list": [ 00:13:34.354 { 00:13:34.354 "name": null, 00:13:34.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.354 "is_configured": false, 00:13:34.354 "data_offset": 0, 00:13:34.354 "data_size": 63488 00:13:34.354 }, 00:13:34.354 { 00:13:34.354 "name": "BaseBdev2", 00:13:34.354 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:34.354 "is_configured": true, 00:13:34.354 "data_offset": 2048, 00:13:34.354 "data_size": 63488 00:13:34.354 }, 00:13:34.354 { 00:13:34.354 "name": "BaseBdev3", 00:13:34.354 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:34.354 "is_configured": true, 
00:13:34.354 "data_offset": 2048, 00:13:34.354 "data_size": 63488 00:13:34.354 } 00:13:34.354 ] 00:13:34.354 }' 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.354 [2024-12-06 11:56:31.962280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.354 [2024-12-06 11:56:31.962369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.354 [2024-12-06 11:56:31.962407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:34.354 [2024-12-06 11:56:31.962423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.354 [2024-12-06 11:56:31.962816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.354 [2024-12-06 
11:56:31.962836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.354 [2024-12-06 11:56:31.962901] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:34.354 [2024-12-06 11:56:31.962916] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.354 [2024-12-06 11:56:31.962932] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.354 [2024-12-06 11:56:31.962942] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:34.354 BaseBdev1 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.354 11:56:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.293 11:56:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 11:56:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.293 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.293 "name": "raid_bdev1", 00:13:35.293 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:35.293 "strip_size_kb": 64, 00:13:35.293 "state": "online", 00:13:35.293 "raid_level": "raid5f", 00:13:35.293 "superblock": true, 00:13:35.293 "num_base_bdevs": 3, 00:13:35.293 "num_base_bdevs_discovered": 2, 00:13:35.293 "num_base_bdevs_operational": 2, 00:13:35.293 "base_bdevs_list": [ 00:13:35.293 { 00:13:35.293 "name": null, 00:13:35.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.293 "is_configured": false, 00:13:35.293 "data_offset": 0, 00:13:35.293 "data_size": 63488 00:13:35.293 }, 00:13:35.293 { 00:13:35.293 "name": "BaseBdev2", 00:13:35.293 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:35.293 "is_configured": true, 00:13:35.293 "data_offset": 2048, 00:13:35.293 "data_size": 63488 00:13:35.293 }, 00:13:35.293 { 00:13:35.293 "name": "BaseBdev3", 00:13:35.293 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:35.293 "is_configured": true, 00:13:35.293 "data_offset": 2048, 00:13:35.293 "data_size": 63488 00:13:35.293 } 00:13:35.293 ] 00:13:35.293 }' 00:13:35.293 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.293 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.862 "name": "raid_bdev1", 00:13:35.862 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:35.862 "strip_size_kb": 64, 00:13:35.862 "state": "online", 00:13:35.862 "raid_level": "raid5f", 00:13:35.862 "superblock": true, 00:13:35.862 "num_base_bdevs": 3, 00:13:35.862 "num_base_bdevs_discovered": 2, 00:13:35.862 "num_base_bdevs_operational": 2, 00:13:35.862 "base_bdevs_list": [ 00:13:35.862 { 00:13:35.862 "name": null, 00:13:35.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.862 "is_configured": false, 00:13:35.862 "data_offset": 0, 00:13:35.862 "data_size": 63488 00:13:35.862 }, 00:13:35.862 { 00:13:35.862 "name": "BaseBdev2", 00:13:35.862 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 
00:13:35.862 "is_configured": true, 00:13:35.862 "data_offset": 2048, 00:13:35.862 "data_size": 63488 00:13:35.862 }, 00:13:35.862 { 00:13:35.862 "name": "BaseBdev3", 00:13:35.862 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:35.862 "is_configured": true, 00:13:35.862 "data_offset": 2048, 00:13:35.862 "data_size": 63488 00:13:35.862 } 00:13:35.862 ] 00:13:35.862 }' 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.862 11:56:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.862 [2024-12-06 11:56:33.504337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.862 [2024-12-06 11:56:33.504543] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:35.862 [2024-12-06 11:56:33.504605] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.862 request: 00:13:35.862 { 00:13:35.862 "base_bdev": "BaseBdev1", 00:13:35.862 "raid_bdev": "raid_bdev1", 00:13:35.862 "method": "bdev_raid_add_base_bdev", 00:13:35.862 "req_id": 1 00:13:35.862 } 00:13:35.862 Got JSON-RPC error response 00:13:35.862 response: 00:13:35.862 { 00:13:35.862 "code": -22, 00:13:35.862 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:35.862 } 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.862 11:56:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:36.801 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.801 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.802 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.062 "name": "raid_bdev1", 00:13:37.062 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:37.062 "strip_size_kb": 64, 00:13:37.062 "state": "online", 00:13:37.062 "raid_level": "raid5f", 00:13:37.062 "superblock": true, 00:13:37.062 "num_base_bdevs": 3, 00:13:37.062 "num_base_bdevs_discovered": 2, 00:13:37.062 "num_base_bdevs_operational": 2, 00:13:37.062 "base_bdevs_list": [ 00:13:37.062 { 00:13:37.062 "name": null, 00:13:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.062 "is_configured": false, 00:13:37.062 "data_offset": 0, 00:13:37.062 "data_size": 63488 00:13:37.062 }, 00:13:37.062 { 00:13:37.062 
"name": "BaseBdev2", 00:13:37.062 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:37.062 "is_configured": true, 00:13:37.062 "data_offset": 2048, 00:13:37.062 "data_size": 63488 00:13:37.062 }, 00:13:37.062 { 00:13:37.062 "name": "BaseBdev3", 00:13:37.062 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:37.062 "is_configured": true, 00:13:37.062 "data_offset": 2048, 00:13:37.062 "data_size": 63488 00:13:37.062 } 00:13:37.062 ] 00:13:37.062 }' 00:13:37.062 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.062 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.322 11:56:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.322 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.322 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.322 "name": "raid_bdev1", 00:13:37.322 "uuid": "d7bdb2a6-4b03-4e04-b06a-cdc5c7ee1a63", 00:13:37.322 
"strip_size_kb": 64, 00:13:37.322 "state": "online", 00:13:37.322 "raid_level": "raid5f", 00:13:37.322 "superblock": true, 00:13:37.322 "num_base_bdevs": 3, 00:13:37.322 "num_base_bdevs_discovered": 2, 00:13:37.322 "num_base_bdevs_operational": 2, 00:13:37.322 "base_bdevs_list": [ 00:13:37.322 { 00:13:37.322 "name": null, 00:13:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.322 "is_configured": false, 00:13:37.322 "data_offset": 0, 00:13:37.322 "data_size": 63488 00:13:37.322 }, 00:13:37.322 { 00:13:37.322 "name": "BaseBdev2", 00:13:37.322 "uuid": "c2aae351-d5d5-5f0f-b430-4644fa9b4f3f", 00:13:37.322 "is_configured": true, 00:13:37.322 "data_offset": 2048, 00:13:37.322 "data_size": 63488 00:13:37.322 }, 00:13:37.322 { 00:13:37.322 "name": "BaseBdev3", 00:13:37.322 "uuid": "3d588aa4-b70d-5c0e-9a43-ab240792f35f", 00:13:37.322 "is_configured": true, 00:13:37.322 "data_offset": 2048, 00:13:37.322 "data_size": 63488 00:13:37.322 } 00:13:37.322 ] 00:13:37.322 }' 00:13:37.322 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92107 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92107 ']' 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92107 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.581 11:56:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92107 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92107' 00:13:37.581 killing process with pid 92107 00:13:37.581 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92107 00:13:37.581 Received shutdown signal, test time was about 60.000000 seconds 00:13:37.581 00:13:37.581 Latency(us) 00:13:37.581 [2024-12-06T11:56:35.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.581 [2024-12-06T11:56:35.339Z] =================================================================================================================== 00:13:37.581 [2024-12-06T11:56:35.340Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.582 [2024-12-06 11:56:35.168566] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.582 [2024-12-06 11:56:35.168681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.582 [2024-12-06 11:56:35.168743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.582 [2024-12-06 11:56:35.168752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:37.582 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92107 00:13:37.582 [2024-12-06 11:56:35.209874] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.841 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:37.841 00:13:37.841 real 0m21.338s 00:13:37.841 user 0m27.745s 
00:13:37.841 sys 0m2.637s 00:13:37.841 ************************************ 00:13:37.841 END TEST raid5f_rebuild_test_sb 00:13:37.841 ************************************ 00:13:37.841 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.841 11:56:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.841 11:56:35 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:37.841 11:56:35 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:37.841 11:56:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:37.841 11:56:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.841 11:56:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.841 ************************************ 00:13:37.841 START TEST raid5f_state_function_test 00:13:37.841 ************************************ 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92839 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:37.841 Process raid pid: 92839 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92839' 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92839 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 92839 ']' 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.841 11:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.100 [2024-12-06 11:56:35.612634] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:38.100 [2024-12-06 11:56:35.612868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.100 [2024-12-06 11:56:35.760583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.100 [2024-12-06 11:56:35.807061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.100 [2024-12-06 11:56:35.849554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.100 [2024-12-06 11:56:35.849650] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.670 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.670 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:38.670 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:38.670 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.670 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.670 [2024-12-06 11:56:36.422849] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.670 [2024-12-06 11:56:36.422972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.670 [2024-12-06 11:56:36.423004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.670 [2024-12-06 11:56:36.423027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.670 [2024-12-06 11:56:36.423044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:38.670 [2024-12-06 11:56:36.423068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.670 [2024-12-06 11:56:36.423100] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:38.670 [2024-12-06 11:56:36.423141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.931 11:56:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.931 "name": "Existed_Raid", 00:13:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.931 "strip_size_kb": 64, 00:13:38.931 "state": "configuring", 00:13:38.931 "raid_level": "raid5f", 00:13:38.931 "superblock": false, 00:13:38.931 "num_base_bdevs": 4, 00:13:38.931 "num_base_bdevs_discovered": 0, 00:13:38.931 "num_base_bdevs_operational": 4, 00:13:38.931 "base_bdevs_list": [ 00:13:38.931 { 00:13:38.931 "name": "BaseBdev1", 00:13:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.931 "is_configured": false, 00:13:38.931 "data_offset": 0, 00:13:38.931 "data_size": 0 00:13:38.931 }, 00:13:38.931 { 00:13:38.931 "name": "BaseBdev2", 00:13:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.931 "is_configured": false, 00:13:38.931 "data_offset": 0, 00:13:38.931 "data_size": 0 00:13:38.931 }, 00:13:38.931 { 00:13:38.931 "name": "BaseBdev3", 00:13:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.931 "is_configured": false, 00:13:38.931 "data_offset": 0, 00:13:38.931 "data_size": 0 00:13:38.931 }, 00:13:38.931 { 00:13:38.931 "name": "BaseBdev4", 00:13:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.931 "is_configured": false, 00:13:38.931 "data_offset": 0, 00:13:38.931 "data_size": 0 00:13:38.931 } 00:13:38.931 ] 00:13:38.931 }' 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.931 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.191 11:56:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.192 [2024-12-06 11:56:36.905892] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.192 [2024-12-06 11:56:36.905932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.192 [2024-12-06 11:56:36.917895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.192 [2024-12-06 11:56:36.917934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.192 [2024-12-06 11:56:36.917942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.192 [2024-12-06 11:56:36.917950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.192 [2024-12-06 11:56:36.917956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.192 [2024-12-06 11:56:36.917964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.192 [2024-12-06 11:56:36.917969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:39.192 [2024-12-06 11:56:36.917976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.192 [2024-12-06 11:56:36.938555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.192 BaseBdev1 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.192 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.452 
11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.452 [ 00:13:39.452 { 00:13:39.452 "name": "BaseBdev1", 00:13:39.452 "aliases": [ 00:13:39.452 "82260e74-5c8d-40fc-8eaa-4830c28b2565" 00:13:39.452 ], 00:13:39.452 "product_name": "Malloc disk", 00:13:39.452 "block_size": 512, 00:13:39.452 "num_blocks": 65536, 00:13:39.452 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:39.452 "assigned_rate_limits": { 00:13:39.452 "rw_ios_per_sec": 0, 00:13:39.452 "rw_mbytes_per_sec": 0, 00:13:39.452 "r_mbytes_per_sec": 0, 00:13:39.452 "w_mbytes_per_sec": 0 00:13:39.452 }, 00:13:39.452 "claimed": true, 00:13:39.452 "claim_type": "exclusive_write", 00:13:39.452 "zoned": false, 00:13:39.452 "supported_io_types": { 00:13:39.452 "read": true, 00:13:39.452 "write": true, 00:13:39.452 "unmap": true, 00:13:39.452 "flush": true, 00:13:39.452 "reset": true, 00:13:39.452 "nvme_admin": false, 00:13:39.452 "nvme_io": false, 00:13:39.452 "nvme_io_md": false, 00:13:39.452 "write_zeroes": true, 00:13:39.452 "zcopy": true, 00:13:39.452 "get_zone_info": false, 00:13:39.452 "zone_management": false, 00:13:39.452 "zone_append": false, 00:13:39.452 "compare": false, 00:13:39.452 "compare_and_write": false, 00:13:39.452 "abort": true, 00:13:39.452 "seek_hole": false, 00:13:39.452 "seek_data": false, 00:13:39.452 "copy": true, 00:13:39.452 "nvme_iov_md": false 00:13:39.452 }, 00:13:39.452 "memory_domains": [ 00:13:39.452 { 00:13:39.452 "dma_device_id": "system", 00:13:39.452 "dma_device_type": 1 00:13:39.452 }, 00:13:39.452 { 00:13:39.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.452 "dma_device_type": 2 00:13:39.452 } 00:13:39.452 ], 00:13:39.452 "driver_specific": {} 00:13:39.452 } 
00:13:39.452 ] 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.452 11:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.452 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:39.452 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.452 "name": "Existed_Raid", 00:13:39.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.452 "strip_size_kb": 64, 00:13:39.452 "state": "configuring", 00:13:39.452 "raid_level": "raid5f", 00:13:39.452 "superblock": false, 00:13:39.452 "num_base_bdevs": 4, 00:13:39.452 "num_base_bdevs_discovered": 1, 00:13:39.452 "num_base_bdevs_operational": 4, 00:13:39.452 "base_bdevs_list": [ 00:13:39.452 { 00:13:39.452 "name": "BaseBdev1", 00:13:39.452 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:39.452 "is_configured": true, 00:13:39.452 "data_offset": 0, 00:13:39.452 "data_size": 65536 00:13:39.452 }, 00:13:39.452 { 00:13:39.452 "name": "BaseBdev2", 00:13:39.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.452 "is_configured": false, 00:13:39.452 "data_offset": 0, 00:13:39.452 "data_size": 0 00:13:39.452 }, 00:13:39.452 { 00:13:39.452 "name": "BaseBdev3", 00:13:39.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.452 "is_configured": false, 00:13:39.452 "data_offset": 0, 00:13:39.452 "data_size": 0 00:13:39.452 }, 00:13:39.452 { 00:13:39.452 "name": "BaseBdev4", 00:13:39.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.452 "is_configured": false, 00:13:39.452 "data_offset": 0, 00:13:39.452 "data_size": 0 00:13:39.452 } 00:13:39.452 ] 00:13:39.452 }' 00:13:39.452 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.452 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.713 
[2024-12-06 11:56:37.405795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.713 [2024-12-06 11:56:37.405843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.713 [2024-12-06 11:56:37.413843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.713 [2024-12-06 11:56:37.415596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.713 [2024-12-06 11:56:37.415639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.713 [2024-12-06 11:56:37.415648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.713 [2024-12-06 11:56:37.415657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.713 [2024-12-06 11:56:37.415663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:39.713 [2024-12-06 11:56:37.415671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.713 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.973 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.973 "name": "Existed_Raid", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:39.973 "strip_size_kb": 64, 00:13:39.973 "state": "configuring", 00:13:39.973 "raid_level": "raid5f", 00:13:39.973 "superblock": false, 00:13:39.973 "num_base_bdevs": 4, 00:13:39.973 "num_base_bdevs_discovered": 1, 00:13:39.973 "num_base_bdevs_operational": 4, 00:13:39.973 "base_bdevs_list": [ 00:13:39.973 { 00:13:39.973 "name": "BaseBdev1", 00:13:39.973 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:39.973 "is_configured": true, 00:13:39.973 "data_offset": 0, 00:13:39.973 "data_size": 65536 00:13:39.973 }, 00:13:39.973 { 00:13:39.973 "name": "BaseBdev2", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.973 "is_configured": false, 00:13:39.973 "data_offset": 0, 00:13:39.973 "data_size": 0 00:13:39.973 }, 00:13:39.973 { 00:13:39.973 "name": "BaseBdev3", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.973 "is_configured": false, 00:13:39.973 "data_offset": 0, 00:13:39.973 "data_size": 0 00:13:39.973 }, 00:13:39.973 { 00:13:39.973 "name": "BaseBdev4", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.973 "is_configured": false, 00:13:39.973 "data_offset": 0, 00:13:39.973 "data_size": 0 00:13:39.973 } 00:13:39.973 ] 00:13:39.973 }' 00:13:39.973 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.973 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.233 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.234 [2024-12-06 11:56:37.879404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.234 BaseBdev2 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.234 [ 00:13:40.234 { 00:13:40.234 "name": "BaseBdev2", 00:13:40.234 "aliases": [ 00:13:40.234 "0fce05c0-c506-4a88-905a-3dc38a2f7b30" 00:13:40.234 ], 00:13:40.234 "product_name": "Malloc disk", 00:13:40.234 "block_size": 512, 00:13:40.234 "num_blocks": 65536, 00:13:40.234 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:40.234 "assigned_rate_limits": { 00:13:40.234 "rw_ios_per_sec": 0, 00:13:40.234 "rw_mbytes_per_sec": 0, 00:13:40.234 
"r_mbytes_per_sec": 0, 00:13:40.234 "w_mbytes_per_sec": 0 00:13:40.234 }, 00:13:40.234 "claimed": true, 00:13:40.234 "claim_type": "exclusive_write", 00:13:40.234 "zoned": false, 00:13:40.234 "supported_io_types": { 00:13:40.234 "read": true, 00:13:40.234 "write": true, 00:13:40.234 "unmap": true, 00:13:40.234 "flush": true, 00:13:40.234 "reset": true, 00:13:40.234 "nvme_admin": false, 00:13:40.234 "nvme_io": false, 00:13:40.234 "nvme_io_md": false, 00:13:40.234 "write_zeroes": true, 00:13:40.234 "zcopy": true, 00:13:40.234 "get_zone_info": false, 00:13:40.234 "zone_management": false, 00:13:40.234 "zone_append": false, 00:13:40.234 "compare": false, 00:13:40.234 "compare_and_write": false, 00:13:40.234 "abort": true, 00:13:40.234 "seek_hole": false, 00:13:40.234 "seek_data": false, 00:13:40.234 "copy": true, 00:13:40.234 "nvme_iov_md": false 00:13:40.234 }, 00:13:40.234 "memory_domains": [ 00:13:40.234 { 00:13:40.234 "dma_device_id": "system", 00:13:40.234 "dma_device_type": 1 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.234 "dma_device_type": 2 00:13:40.234 } 00:13:40.234 ], 00:13:40.234 "driver_specific": {} 00:13:40.234 } 00:13:40.234 ] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.234 "name": "Existed_Raid", 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.234 "strip_size_kb": 64, 00:13:40.234 "state": "configuring", 00:13:40.234 "raid_level": "raid5f", 00:13:40.234 "superblock": false, 00:13:40.234 "num_base_bdevs": 4, 00:13:40.234 "num_base_bdevs_discovered": 2, 00:13:40.234 "num_base_bdevs_operational": 4, 00:13:40.234 "base_bdevs_list": [ 00:13:40.234 { 00:13:40.234 "name": "BaseBdev1", 00:13:40.234 "uuid": 
"82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:40.234 "is_configured": true, 00:13:40.234 "data_offset": 0, 00:13:40.234 "data_size": 65536 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "name": "BaseBdev2", 00:13:40.234 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:40.234 "is_configured": true, 00:13:40.234 "data_offset": 0, 00:13:40.234 "data_size": 65536 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "name": "BaseBdev3", 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.234 "is_configured": false, 00:13:40.234 "data_offset": 0, 00:13:40.234 "data_size": 0 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "name": "BaseBdev4", 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.234 "is_configured": false, 00:13:40.234 "data_offset": 0, 00:13:40.234 "data_size": 0 00:13:40.234 } 00:13:40.234 ] 00:13:40.234 }' 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.234 11:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 [2024-12-06 11:56:38.413117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.805 BaseBdev3 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 [ 00:13:40.805 { 00:13:40.805 "name": "BaseBdev3", 00:13:40.805 "aliases": [ 00:13:40.805 "caa5707e-b0cb-446c-a3ce-8b068bd2491c" 00:13:40.805 ], 00:13:40.805 "product_name": "Malloc disk", 00:13:40.805 "block_size": 512, 00:13:40.805 "num_blocks": 65536, 00:13:40.805 "uuid": "caa5707e-b0cb-446c-a3ce-8b068bd2491c", 00:13:40.805 "assigned_rate_limits": { 00:13:40.805 "rw_ios_per_sec": 0, 00:13:40.805 "rw_mbytes_per_sec": 0, 00:13:40.805 "r_mbytes_per_sec": 0, 00:13:40.805 "w_mbytes_per_sec": 0 00:13:40.805 }, 00:13:40.805 "claimed": true, 00:13:40.805 "claim_type": "exclusive_write", 00:13:40.805 "zoned": false, 00:13:40.805 "supported_io_types": { 00:13:40.805 "read": true, 00:13:40.805 "write": true, 00:13:40.805 "unmap": true, 00:13:40.805 "flush": true, 00:13:40.805 "reset": true, 00:13:40.805 "nvme_admin": false, 
00:13:40.805 "nvme_io": false, 00:13:40.805 "nvme_io_md": false, 00:13:40.805 "write_zeroes": true, 00:13:40.805 "zcopy": true, 00:13:40.805 "get_zone_info": false, 00:13:40.805 "zone_management": false, 00:13:40.805 "zone_append": false, 00:13:40.805 "compare": false, 00:13:40.805 "compare_and_write": false, 00:13:40.805 "abort": true, 00:13:40.805 "seek_hole": false, 00:13:40.805 "seek_data": false, 00:13:40.805 "copy": true, 00:13:40.805 "nvme_iov_md": false 00:13:40.805 }, 00:13:40.805 "memory_domains": [ 00:13:40.805 { 00:13:40.805 "dma_device_id": "system", 00:13:40.805 "dma_device_type": 1 00:13:40.805 }, 00:13:40.805 { 00:13:40.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.805 "dma_device_type": 2 00:13:40.805 } 00:13:40.805 ], 00:13:40.805 "driver_specific": {} 00:13:40.805 } 00:13:40.805 ] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.805 "name": "Existed_Raid", 00:13:40.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.805 "strip_size_kb": 64, 00:13:40.805 "state": "configuring", 00:13:40.805 "raid_level": "raid5f", 00:13:40.805 "superblock": false, 00:13:40.805 "num_base_bdevs": 4, 00:13:40.805 "num_base_bdevs_discovered": 3, 00:13:40.805 "num_base_bdevs_operational": 4, 00:13:40.805 "base_bdevs_list": [ 00:13:40.805 { 00:13:40.805 "name": "BaseBdev1", 00:13:40.805 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 0, 00:13:40.805 "data_size": 65536 00:13:40.805 }, 00:13:40.805 { 00:13:40.805 "name": "BaseBdev2", 00:13:40.805 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 0, 00:13:40.805 "data_size": 65536 00:13:40.805 }, 00:13:40.805 { 
00:13:40.805 "name": "BaseBdev3", 00:13:40.805 "uuid": "caa5707e-b0cb-446c-a3ce-8b068bd2491c", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 0, 00:13:40.805 "data_size": 65536 00:13:40.805 }, 00:13:40.805 { 00:13:40.805 "name": "BaseBdev4", 00:13:40.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.805 "is_configured": false, 00:13:40.805 "data_offset": 0, 00:13:40.805 "data_size": 0 00:13:40.805 } 00:13:40.805 ] 00:13:40.805 }' 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.805 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 [2024-12-06 11:56:38.871298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.376 [2024-12-06 11:56:38.871439] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:41.376 [2024-12-06 11:56:38.871474] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:41.376 [2024-12-06 11:56:38.871753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:41.376 [2024-12-06 11:56:38.872219] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:41.376 [2024-12-06 11:56:38.872287] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:41.376 [2024-12-06 11:56:38.872519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.376 BaseBdev4 00:13:41.376 11:56:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 [ 00:13:41.376 { 00:13:41.376 "name": "BaseBdev4", 00:13:41.376 "aliases": [ 00:13:41.376 "0ea6aa51-11b1-426f-8ffc-5106feee84d7" 00:13:41.376 ], 00:13:41.376 "product_name": "Malloc disk", 00:13:41.376 "block_size": 512, 00:13:41.376 "num_blocks": 65536, 00:13:41.376 "uuid": "0ea6aa51-11b1-426f-8ffc-5106feee84d7", 00:13:41.376 "assigned_rate_limits": { 00:13:41.376 "rw_ios_per_sec": 0, 00:13:41.376 
"rw_mbytes_per_sec": 0, 00:13:41.376 "r_mbytes_per_sec": 0, 00:13:41.376 "w_mbytes_per_sec": 0 00:13:41.376 }, 00:13:41.376 "claimed": true, 00:13:41.376 "claim_type": "exclusive_write", 00:13:41.376 "zoned": false, 00:13:41.376 "supported_io_types": { 00:13:41.376 "read": true, 00:13:41.376 "write": true, 00:13:41.376 "unmap": true, 00:13:41.376 "flush": true, 00:13:41.376 "reset": true, 00:13:41.376 "nvme_admin": false, 00:13:41.376 "nvme_io": false, 00:13:41.376 "nvme_io_md": false, 00:13:41.376 "write_zeroes": true, 00:13:41.376 "zcopy": true, 00:13:41.376 "get_zone_info": false, 00:13:41.376 "zone_management": false, 00:13:41.376 "zone_append": false, 00:13:41.376 "compare": false, 00:13:41.376 "compare_and_write": false, 00:13:41.376 "abort": true, 00:13:41.376 "seek_hole": false, 00:13:41.376 "seek_data": false, 00:13:41.376 "copy": true, 00:13:41.376 "nvme_iov_md": false 00:13:41.376 }, 00:13:41.376 "memory_domains": [ 00:13:41.376 { 00:13:41.376 "dma_device_id": "system", 00:13:41.376 "dma_device_type": 1 00:13:41.376 }, 00:13:41.376 { 00:13:41.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.376 "dma_device_type": 2 00:13:41.376 } 00:13:41.376 ], 00:13:41.376 "driver_specific": {} 00:13:41.376 } 00:13:41.376 ] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.376 11:56:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.376 "name": "Existed_Raid", 00:13:41.376 "uuid": "ffa49c4d-e9da-47b3-9002-b9d5fa607939", 00:13:41.376 "strip_size_kb": 64, 00:13:41.376 "state": "online", 00:13:41.376 "raid_level": "raid5f", 00:13:41.376 "superblock": false, 00:13:41.376 "num_base_bdevs": 4, 00:13:41.376 "num_base_bdevs_discovered": 4, 00:13:41.376 "num_base_bdevs_operational": 4, 00:13:41.376 "base_bdevs_list": [ 00:13:41.376 { 00:13:41.376 "name": 
"BaseBdev1", 00:13:41.376 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:41.376 "is_configured": true, 00:13:41.376 "data_offset": 0, 00:13:41.376 "data_size": 65536 00:13:41.376 }, 00:13:41.376 { 00:13:41.376 "name": "BaseBdev2", 00:13:41.376 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:41.376 "is_configured": true, 00:13:41.376 "data_offset": 0, 00:13:41.376 "data_size": 65536 00:13:41.376 }, 00:13:41.376 { 00:13:41.376 "name": "BaseBdev3", 00:13:41.376 "uuid": "caa5707e-b0cb-446c-a3ce-8b068bd2491c", 00:13:41.376 "is_configured": true, 00:13:41.376 "data_offset": 0, 00:13:41.376 "data_size": 65536 00:13:41.376 }, 00:13:41.376 { 00:13:41.376 "name": "BaseBdev4", 00:13:41.376 "uuid": "0ea6aa51-11b1-426f-8ffc-5106feee84d7", 00:13:41.376 "is_configured": true, 00:13:41.376 "data_offset": 0, 00:13:41.376 "data_size": 65536 00:13:41.376 } 00:13:41.376 ] 00:13:41.376 }' 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.376 11:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.637 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.637 [2024-12-06 11:56:39.378598] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.898 "name": "Existed_Raid", 00:13:41.898 "aliases": [ 00:13:41.898 "ffa49c4d-e9da-47b3-9002-b9d5fa607939" 00:13:41.898 ], 00:13:41.898 "product_name": "Raid Volume", 00:13:41.898 "block_size": 512, 00:13:41.898 "num_blocks": 196608, 00:13:41.898 "uuid": "ffa49c4d-e9da-47b3-9002-b9d5fa607939", 00:13:41.898 "assigned_rate_limits": { 00:13:41.898 "rw_ios_per_sec": 0, 00:13:41.898 "rw_mbytes_per_sec": 0, 00:13:41.898 "r_mbytes_per_sec": 0, 00:13:41.898 "w_mbytes_per_sec": 0 00:13:41.898 }, 00:13:41.898 "claimed": false, 00:13:41.898 "zoned": false, 00:13:41.898 "supported_io_types": { 00:13:41.898 "read": true, 00:13:41.898 "write": true, 00:13:41.898 "unmap": false, 00:13:41.898 "flush": false, 00:13:41.898 "reset": true, 00:13:41.898 "nvme_admin": false, 00:13:41.898 "nvme_io": false, 00:13:41.898 "nvme_io_md": false, 00:13:41.898 "write_zeroes": true, 00:13:41.898 "zcopy": false, 00:13:41.898 "get_zone_info": false, 00:13:41.898 "zone_management": false, 00:13:41.898 "zone_append": false, 00:13:41.898 "compare": false, 00:13:41.898 "compare_and_write": false, 00:13:41.898 "abort": false, 00:13:41.898 "seek_hole": false, 00:13:41.898 "seek_data": false, 00:13:41.898 "copy": false, 00:13:41.898 "nvme_iov_md": false 00:13:41.898 }, 00:13:41.898 "driver_specific": { 00:13:41.898 "raid": { 00:13:41.898 "uuid": "ffa49c4d-e9da-47b3-9002-b9d5fa607939", 00:13:41.898 "strip_size_kb": 64, 
00:13:41.898 "state": "online", 00:13:41.898 "raid_level": "raid5f", 00:13:41.898 "superblock": false, 00:13:41.898 "num_base_bdevs": 4, 00:13:41.898 "num_base_bdevs_discovered": 4, 00:13:41.898 "num_base_bdevs_operational": 4, 00:13:41.898 "base_bdevs_list": [ 00:13:41.898 { 00:13:41.898 "name": "BaseBdev1", 00:13:41.898 "uuid": "82260e74-5c8d-40fc-8eaa-4830c28b2565", 00:13:41.898 "is_configured": true, 00:13:41.898 "data_offset": 0, 00:13:41.898 "data_size": 65536 00:13:41.898 }, 00:13:41.898 { 00:13:41.898 "name": "BaseBdev2", 00:13:41.898 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:41.898 "is_configured": true, 00:13:41.898 "data_offset": 0, 00:13:41.898 "data_size": 65536 00:13:41.898 }, 00:13:41.898 { 00:13:41.898 "name": "BaseBdev3", 00:13:41.898 "uuid": "caa5707e-b0cb-446c-a3ce-8b068bd2491c", 00:13:41.898 "is_configured": true, 00:13:41.898 "data_offset": 0, 00:13:41.898 "data_size": 65536 00:13:41.898 }, 00:13:41.898 { 00:13:41.898 "name": "BaseBdev4", 00:13:41.898 "uuid": "0ea6aa51-11b1-426f-8ffc-5106feee84d7", 00:13:41.898 "is_configured": true, 00:13:41.898 "data_offset": 0, 00:13:41.898 "data_size": 65536 00:13:41.898 } 00:13:41.898 ] 00:13:41.898 } 00:13:41.898 } 00:13:41.898 }' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:41.898 BaseBdev2 00:13:41.898 BaseBdev3 00:13:41.898 BaseBdev4' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.898 11:56:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.898 [2024-12-06 11:56:39.638005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.898 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.159 11:56:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.159 "name": "Existed_Raid", 00:13:42.159 "uuid": "ffa49c4d-e9da-47b3-9002-b9d5fa607939", 00:13:42.159 "strip_size_kb": 64, 00:13:42.159 "state": "online", 00:13:42.159 "raid_level": "raid5f", 00:13:42.159 "superblock": false, 00:13:42.159 "num_base_bdevs": 4, 00:13:42.159 "num_base_bdevs_discovered": 3, 00:13:42.159 "num_base_bdevs_operational": 3, 00:13:42.159 "base_bdevs_list": [ 00:13:42.159 { 00:13:42.159 "name": null, 00:13:42.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.159 "is_configured": false, 00:13:42.159 "data_offset": 0, 00:13:42.159 "data_size": 65536 00:13:42.159 }, 00:13:42.159 { 00:13:42.159 "name": "BaseBdev2", 00:13:42.159 "uuid": "0fce05c0-c506-4a88-905a-3dc38a2f7b30", 00:13:42.159 "is_configured": true, 00:13:42.159 "data_offset": 0, 00:13:42.159 "data_size": 65536 00:13:42.159 }, 00:13:42.159 { 00:13:42.159 "name": "BaseBdev3", 00:13:42.159 "uuid": "caa5707e-b0cb-446c-a3ce-8b068bd2491c", 00:13:42.159 "is_configured": true, 00:13:42.159 "data_offset": 0, 00:13:42.159 "data_size": 65536 00:13:42.159 }, 00:13:42.159 { 00:13:42.159 "name": "BaseBdev4", 00:13:42.159 "uuid": "0ea6aa51-11b1-426f-8ffc-5106feee84d7", 00:13:42.159 "is_configured": true, 00:13:42.159 "data_offset": 0, 00:13:42.159 "data_size": 65536 00:13:42.159 } 00:13:42.159 ] 00:13:42.159 }' 00:13:42.159 
11:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.159 11:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.419 [2024-12-06 11:56:40.156277] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.419 [2024-12-06 11:56:40.156408] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.419 [2024-12-06 11:56:40.167580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.419 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 [2024-12-06 11:56:40.227496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 [2024-12-06 11:56:40.298438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:42.680 [2024-12-06 11:56:40.298525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 11:56:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 BaseBdev2 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 [ 00:13:42.680 { 00:13:42.680 "name": "BaseBdev2", 00:13:42.680 "aliases": [ 00:13:42.680 "ae843e58-6178-4303-b293-19297ca63253" 00:13:42.680 ], 00:13:42.680 "product_name": "Malloc disk", 00:13:42.680 "block_size": 512, 00:13:42.680 "num_blocks": 65536, 00:13:42.680 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:42.680 "assigned_rate_limits": { 00:13:42.680 "rw_ios_per_sec": 0, 00:13:42.680 "rw_mbytes_per_sec": 0, 00:13:42.680 "r_mbytes_per_sec": 0, 00:13:42.680 "w_mbytes_per_sec": 0 00:13:42.680 }, 00:13:42.680 "claimed": false, 00:13:42.680 "zoned": false, 00:13:42.680 "supported_io_types": { 00:13:42.680 "read": true, 00:13:42.680 "write": true, 00:13:42.680 "unmap": true, 00:13:42.680 "flush": true, 00:13:42.680 "reset": true, 00:13:42.680 "nvme_admin": false, 00:13:42.680 "nvme_io": false, 00:13:42.680 "nvme_io_md": false, 00:13:42.680 "write_zeroes": true, 00:13:42.680 "zcopy": true, 00:13:42.680 "get_zone_info": false, 00:13:42.680 "zone_management": false, 00:13:42.680 "zone_append": false, 00:13:42.680 "compare": false, 00:13:42.680 "compare_and_write": false, 00:13:42.680 "abort": true, 00:13:42.680 "seek_hole": false, 00:13:42.680 "seek_data": false, 00:13:42.680 "copy": true, 00:13:42.680 "nvme_iov_md": false 00:13:42.680 }, 00:13:42.680 "memory_domains": [ 00:13:42.680 { 00:13:42.680 "dma_device_id": "system", 00:13:42.680 "dma_device_type": 1 00:13:42.680 }, 
00:13:42.680 { 00:13:42.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.680 "dma_device_type": 2 00:13:42.680 } 00:13:42.680 ], 00:13:42.680 "driver_specific": {} 00:13:42.680 } 00:13:42.680 ] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.680 BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.680 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.941 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.941 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.941 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 [ 00:13:42.942 { 00:13:42.942 "name": "BaseBdev3", 00:13:42.942 "aliases": [ 00:13:42.942 "c061642a-ff8d-4505-be20-04aa86930988" 00:13:42.942 ], 00:13:42.942 "product_name": "Malloc disk", 00:13:42.942 "block_size": 512, 00:13:42.942 "num_blocks": 65536, 00:13:42.942 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:42.942 "assigned_rate_limits": { 00:13:42.942 "rw_ios_per_sec": 0, 00:13:42.942 "rw_mbytes_per_sec": 0, 00:13:42.942 "r_mbytes_per_sec": 0, 00:13:42.942 "w_mbytes_per_sec": 0 00:13:42.942 }, 00:13:42.942 "claimed": false, 00:13:42.942 "zoned": false, 00:13:42.942 "supported_io_types": { 00:13:42.942 "read": true, 00:13:42.942 "write": true, 00:13:42.942 "unmap": true, 00:13:42.942 "flush": true, 00:13:42.942 "reset": true, 00:13:42.942 "nvme_admin": false, 00:13:42.942 "nvme_io": false, 00:13:42.942 "nvme_io_md": false, 00:13:42.942 "write_zeroes": true, 00:13:42.942 "zcopy": true, 00:13:42.942 "get_zone_info": false, 00:13:42.942 "zone_management": false, 00:13:42.942 "zone_append": false, 00:13:42.942 "compare": false, 00:13:42.942 "compare_and_write": false, 00:13:42.942 "abort": true, 00:13:42.942 "seek_hole": false, 00:13:42.942 "seek_data": false, 00:13:42.942 "copy": true, 00:13:42.942 "nvme_iov_md": false 00:13:42.942 }, 00:13:42.942 "memory_domains": [ 00:13:42.942 { 00:13:42.942 "dma_device_id": "system", 00:13:42.942 
"dma_device_type": 1 00:13:42.942 }, 00:13:42.942 { 00:13:42.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.942 "dma_device_type": 2 00:13:42.942 } 00:13:42.942 ], 00:13:42.942 "driver_specific": {} 00:13:42.942 } 00:13:42.942 ] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 BaseBdev4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.942 11:56:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 [ 00:13:42.942 { 00:13:42.942 "name": "BaseBdev4", 00:13:42.942 "aliases": [ 00:13:42.942 "61ffaddf-4b93-48e3-92be-b9cff9f6ae68" 00:13:42.942 ], 00:13:42.942 "product_name": "Malloc disk", 00:13:42.942 "block_size": 512, 00:13:42.942 "num_blocks": 65536, 00:13:42.942 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:42.942 "assigned_rate_limits": { 00:13:42.942 "rw_ios_per_sec": 0, 00:13:42.942 "rw_mbytes_per_sec": 0, 00:13:42.942 "r_mbytes_per_sec": 0, 00:13:42.942 "w_mbytes_per_sec": 0 00:13:42.942 }, 00:13:42.942 "claimed": false, 00:13:42.942 "zoned": false, 00:13:42.942 "supported_io_types": { 00:13:42.942 "read": true, 00:13:42.942 "write": true, 00:13:42.942 "unmap": true, 00:13:42.942 "flush": true, 00:13:42.942 "reset": true, 00:13:42.942 "nvme_admin": false, 00:13:42.942 "nvme_io": false, 00:13:42.942 "nvme_io_md": false, 00:13:42.942 "write_zeroes": true, 00:13:42.942 "zcopy": true, 00:13:42.942 "get_zone_info": false, 00:13:42.942 "zone_management": false, 00:13:42.942 "zone_append": false, 00:13:42.942 "compare": false, 00:13:42.942 "compare_and_write": false, 00:13:42.942 "abort": true, 00:13:42.942 "seek_hole": false, 00:13:42.942 "seek_data": false, 00:13:42.942 "copy": true, 00:13:42.942 "nvme_iov_md": false 00:13:42.942 }, 00:13:42.942 "memory_domains": [ 00:13:42.942 { 00:13:42.942 
"dma_device_id": "system", 00:13:42.942 "dma_device_type": 1 00:13:42.942 }, 00:13:42.942 { 00:13:42.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.942 "dma_device_type": 2 00:13:42.942 } 00:13:42.942 ], 00:13:42.942 "driver_specific": {} 00:13:42.942 } 00:13:42.942 ] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 [2024-12-06 11:56:40.529044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.942 [2024-12-06 11:56:40.529155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.942 [2024-12-06 11:56:40.529202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.942 [2024-12-06 11:56:40.530977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.942 [2024-12-06 11:56:40.531021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.942 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.942 "name": "Existed_Raid", 00:13:42.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.942 "strip_size_kb": 64, 00:13:42.942 "state": "configuring", 00:13:42.942 "raid_level": "raid5f", 00:13:42.942 "superblock": false, 00:13:42.942 
"num_base_bdevs": 4, 00:13:42.942 "num_base_bdevs_discovered": 3, 00:13:42.942 "num_base_bdevs_operational": 4, 00:13:42.942 "base_bdevs_list": [ 00:13:42.942 { 00:13:42.942 "name": "BaseBdev1", 00:13:42.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.942 "is_configured": false, 00:13:42.942 "data_offset": 0, 00:13:42.942 "data_size": 0 00:13:42.942 }, 00:13:42.942 { 00:13:42.942 "name": "BaseBdev2", 00:13:42.942 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:42.942 "is_configured": true, 00:13:42.942 "data_offset": 0, 00:13:42.942 "data_size": 65536 00:13:42.942 }, 00:13:42.942 { 00:13:42.943 "name": "BaseBdev3", 00:13:42.943 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:42.943 "is_configured": true, 00:13:42.943 "data_offset": 0, 00:13:42.943 "data_size": 65536 00:13:42.943 }, 00:13:42.943 { 00:13:42.943 "name": "BaseBdev4", 00:13:42.943 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:42.943 "is_configured": true, 00:13:42.943 "data_offset": 0, 00:13:42.943 "data_size": 65536 00:13:42.943 } 00:13:42.943 ] 00:13:42.943 }' 00:13:42.943 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.943 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.513 11:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:43.513 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.513 11:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.513 [2024-12-06 11:56:41.004243] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.513 "name": "Existed_Raid", 00:13:43.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.513 "strip_size_kb": 64, 00:13:43.513 "state": "configuring", 00:13:43.513 "raid_level": "raid5f", 00:13:43.513 "superblock": false, 00:13:43.513 "num_base_bdevs": 4, 
00:13:43.513 "num_base_bdevs_discovered": 2, 00:13:43.513 "num_base_bdevs_operational": 4, 00:13:43.513 "base_bdevs_list": [ 00:13:43.513 { 00:13:43.513 "name": "BaseBdev1", 00:13:43.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.513 "is_configured": false, 00:13:43.513 "data_offset": 0, 00:13:43.513 "data_size": 0 00:13:43.513 }, 00:13:43.513 { 00:13:43.513 "name": null, 00:13:43.513 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:43.513 "is_configured": false, 00:13:43.513 "data_offset": 0, 00:13:43.513 "data_size": 65536 00:13:43.513 }, 00:13:43.513 { 00:13:43.513 "name": "BaseBdev3", 00:13:43.513 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:43.513 "is_configured": true, 00:13:43.513 "data_offset": 0, 00:13:43.513 "data_size": 65536 00:13:43.513 }, 00:13:43.513 { 00:13:43.513 "name": "BaseBdev4", 00:13:43.513 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:43.513 "is_configured": true, 00:13:43.513 "data_offset": 0, 00:13:43.513 "data_size": 65536 00:13:43.513 } 00:13:43.513 ] 00:13:43.513 }' 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.513 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:43.774 11:56:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.774 [2024-12-06 11:56:41.478217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.774 BaseBdev1 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.774 11:56:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.774 [ 00:13:43.774 { 00:13:43.774 "name": "BaseBdev1", 00:13:43.774 "aliases": [ 00:13:43.774 "3c34e455-9b4b-4adc-a7c5-063453416340" 00:13:43.774 ], 00:13:43.774 "product_name": "Malloc disk", 00:13:43.774 "block_size": 512, 00:13:43.774 "num_blocks": 65536, 00:13:43.774 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:43.774 "assigned_rate_limits": { 00:13:43.774 "rw_ios_per_sec": 0, 00:13:43.774 "rw_mbytes_per_sec": 0, 00:13:43.774 "r_mbytes_per_sec": 0, 00:13:43.774 "w_mbytes_per_sec": 0 00:13:43.774 }, 00:13:43.774 "claimed": true, 00:13:43.774 "claim_type": "exclusive_write", 00:13:43.774 "zoned": false, 00:13:43.774 "supported_io_types": { 00:13:43.774 "read": true, 00:13:43.774 "write": true, 00:13:43.774 "unmap": true, 00:13:43.774 "flush": true, 00:13:43.774 "reset": true, 00:13:43.774 "nvme_admin": false, 00:13:43.774 "nvme_io": false, 00:13:43.774 "nvme_io_md": false, 00:13:43.774 "write_zeroes": true, 00:13:43.774 "zcopy": true, 00:13:43.774 "get_zone_info": false, 00:13:43.774 "zone_management": false, 00:13:43.774 "zone_append": false, 00:13:43.774 "compare": false, 00:13:43.774 "compare_and_write": false, 00:13:43.774 "abort": true, 00:13:43.774 "seek_hole": false, 00:13:43.774 "seek_data": false, 00:13:43.774 "copy": true, 00:13:43.774 "nvme_iov_md": false 00:13:43.774 }, 00:13:43.774 "memory_domains": [ 00:13:43.774 { 00:13:43.774 "dma_device_id": "system", 00:13:43.774 "dma_device_type": 1 00:13:43.774 }, 00:13:43.774 { 00:13:43.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.774 "dma_device_type": 2 00:13:43.774 } 00:13:43.774 ], 00:13:43.774 "driver_specific": {} 00:13:43.774 } 00:13:43.774 ] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:43.774 11:56:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.774 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.034 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.034 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.034 "name": "Existed_Raid", 00:13:44.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.034 "strip_size_kb": 64, 00:13:44.034 "state": 
"configuring", 00:13:44.034 "raid_level": "raid5f", 00:13:44.034 "superblock": false, 00:13:44.034 "num_base_bdevs": 4, 00:13:44.034 "num_base_bdevs_discovered": 3, 00:13:44.034 "num_base_bdevs_operational": 4, 00:13:44.034 "base_bdevs_list": [ 00:13:44.034 { 00:13:44.034 "name": "BaseBdev1", 00:13:44.034 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:44.034 "is_configured": true, 00:13:44.034 "data_offset": 0, 00:13:44.034 "data_size": 65536 00:13:44.034 }, 00:13:44.034 { 00:13:44.034 "name": null, 00:13:44.034 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:44.034 "is_configured": false, 00:13:44.034 "data_offset": 0, 00:13:44.034 "data_size": 65536 00:13:44.034 }, 00:13:44.034 { 00:13:44.034 "name": "BaseBdev3", 00:13:44.034 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:44.034 "is_configured": true, 00:13:44.034 "data_offset": 0, 00:13:44.034 "data_size": 65536 00:13:44.034 }, 00:13:44.034 { 00:13:44.034 "name": "BaseBdev4", 00:13:44.034 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:44.034 "is_configured": true, 00:13:44.034 "data_offset": 0, 00:13:44.034 "data_size": 65536 00:13:44.034 } 00:13:44.034 ] 00:13:44.034 }' 00:13:44.034 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.034 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.293 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.294 11:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:44.294 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.294 11:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.294 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.294 11:56:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:44.294 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:44.294 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.294 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.294 [2024-12-06 11:56:42.045289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.553 11:56:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.553 "name": "Existed_Raid", 00:13:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.553 "strip_size_kb": 64, 00:13:44.553 "state": "configuring", 00:13:44.553 "raid_level": "raid5f", 00:13:44.553 "superblock": false, 00:13:44.553 "num_base_bdevs": 4, 00:13:44.553 "num_base_bdevs_discovered": 2, 00:13:44.553 "num_base_bdevs_operational": 4, 00:13:44.553 "base_bdevs_list": [ 00:13:44.553 { 00:13:44.553 "name": "BaseBdev1", 00:13:44.553 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:44.553 "is_configured": true, 00:13:44.553 "data_offset": 0, 00:13:44.553 "data_size": 65536 00:13:44.553 }, 00:13:44.553 { 00:13:44.553 "name": null, 00:13:44.553 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:44.553 "is_configured": false, 00:13:44.553 "data_offset": 0, 00:13:44.553 "data_size": 65536 00:13:44.553 }, 00:13:44.553 { 00:13:44.553 "name": null, 00:13:44.553 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:44.553 "is_configured": false, 00:13:44.553 "data_offset": 0, 00:13:44.553 "data_size": 65536 00:13:44.553 }, 00:13:44.553 { 00:13:44.553 "name": "BaseBdev4", 00:13:44.553 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:44.553 "is_configured": true, 00:13:44.553 "data_offset": 0, 00:13:44.553 "data_size": 65536 00:13:44.553 } 00:13:44.553 ] 00:13:44.553 }' 00:13:44.553 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.553 11:56:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.813 [2024-12-06 11:56:42.520480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.813 
11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.813 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.814 "name": "Existed_Raid", 00:13:44.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.814 "strip_size_kb": 64, 00:13:44.814 "state": "configuring", 00:13:44.814 "raid_level": "raid5f", 00:13:44.814 "superblock": false, 00:13:44.814 "num_base_bdevs": 4, 00:13:44.814 "num_base_bdevs_discovered": 3, 00:13:44.814 "num_base_bdevs_operational": 4, 00:13:44.814 "base_bdevs_list": [ 00:13:44.814 { 00:13:44.814 "name": "BaseBdev1", 00:13:44.814 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:44.814 "is_configured": true, 00:13:44.814 "data_offset": 0, 00:13:44.814 "data_size": 65536 00:13:44.814 }, 00:13:44.814 { 00:13:44.814 "name": null, 00:13:44.814 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:44.814 "is_configured": 
false, 00:13:44.814 "data_offset": 0, 00:13:44.814 "data_size": 65536 00:13:44.814 }, 00:13:44.814 { 00:13:44.814 "name": "BaseBdev3", 00:13:44.814 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:44.814 "is_configured": true, 00:13:44.814 "data_offset": 0, 00:13:44.814 "data_size": 65536 00:13:44.814 }, 00:13:44.814 { 00:13:44.814 "name": "BaseBdev4", 00:13:44.814 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:44.814 "is_configured": true, 00:13:44.814 "data_offset": 0, 00:13:44.814 "data_size": 65536 00:13:44.814 } 00:13:44.814 ] 00:13:44.814 }' 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.814 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 [2024-12-06 11:56:42.983750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.384 11:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.384 "name": "Existed_Raid", 00:13:45.384 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:45.384 "strip_size_kb": 64, 00:13:45.384 "state": "configuring", 00:13:45.384 "raid_level": "raid5f", 00:13:45.384 "superblock": false, 00:13:45.384 "num_base_bdevs": 4, 00:13:45.384 "num_base_bdevs_discovered": 2, 00:13:45.384 "num_base_bdevs_operational": 4, 00:13:45.384 "base_bdevs_list": [ 00:13:45.384 { 00:13:45.384 "name": null, 00:13:45.384 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:45.384 "is_configured": false, 00:13:45.384 "data_offset": 0, 00:13:45.384 "data_size": 65536 00:13:45.384 }, 00:13:45.384 { 00:13:45.384 "name": null, 00:13:45.384 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:45.384 "is_configured": false, 00:13:45.384 "data_offset": 0, 00:13:45.384 "data_size": 65536 00:13:45.384 }, 00:13:45.384 { 00:13:45.384 "name": "BaseBdev3", 00:13:45.384 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:45.384 "is_configured": true, 00:13:45.384 "data_offset": 0, 00:13:45.384 "data_size": 65536 00:13:45.384 }, 00:13:45.384 { 00:13:45.384 "name": "BaseBdev4", 00:13:45.384 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:45.384 "is_configured": true, 00:13:45.384 "data_offset": 0, 00:13:45.384 "data_size": 65536 00:13:45.384 } 00:13:45.384 ] 00:13:45.384 }' 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.384 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.953 [2024-12-06 11:56:43.513253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.953 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.953 "name": "Existed_Raid", 00:13:45.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.953 "strip_size_kb": 64, 00:13:45.953 "state": "configuring", 00:13:45.953 "raid_level": "raid5f", 00:13:45.953 "superblock": false, 00:13:45.953 "num_base_bdevs": 4, 00:13:45.953 "num_base_bdevs_discovered": 3, 00:13:45.953 "num_base_bdevs_operational": 4, 00:13:45.953 "base_bdevs_list": [ 00:13:45.953 { 00:13:45.953 "name": null, 00:13:45.953 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:45.953 "is_configured": false, 00:13:45.953 "data_offset": 0, 00:13:45.954 "data_size": 65536 00:13:45.954 }, 00:13:45.954 { 00:13:45.954 "name": "BaseBdev2", 00:13:45.954 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:45.954 "is_configured": true, 00:13:45.954 "data_offset": 0, 00:13:45.954 "data_size": 65536 00:13:45.954 }, 00:13:45.954 { 00:13:45.954 "name": "BaseBdev3", 00:13:45.954 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:45.954 "is_configured": true, 00:13:45.954 "data_offset": 0, 00:13:45.954 "data_size": 65536 00:13:45.954 }, 00:13:45.954 { 00:13:45.954 "name": "BaseBdev4", 00:13:45.954 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:45.954 "is_configured": true, 00:13:45.954 "data_offset": 0, 00:13:45.954 "data_size": 65536 00:13:45.954 } 00:13:45.954 ] 00:13:45.954 }' 00:13:45.954 11:56:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.954 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.214 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.214 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.214 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.214 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.214 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 11:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c34e455-9b4b-4adc-a7c5-063453416340 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 [2024-12-06 11:56:44.034966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:46.474 [2024-12-06 
11:56:44.035014] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:46.474 [2024-12-06 11:56:44.035022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:46.474 [2024-12-06 11:56:44.035312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:46.474 [2024-12-06 11:56:44.035763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:46.474 [2024-12-06 11:56:44.035786] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:46.474 [2024-12-06 11:56:44.035948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.474 NewBaseBdev 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.474 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 [ 00:13:46.474 { 00:13:46.474 "name": "NewBaseBdev", 00:13:46.474 "aliases": [ 00:13:46.474 "3c34e455-9b4b-4adc-a7c5-063453416340" 00:13:46.474 ], 00:13:46.474 "product_name": "Malloc disk", 00:13:46.474 "block_size": 512, 00:13:46.474 "num_blocks": 65536, 00:13:46.474 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:46.474 "assigned_rate_limits": { 00:13:46.474 "rw_ios_per_sec": 0, 00:13:46.474 "rw_mbytes_per_sec": 0, 00:13:46.474 "r_mbytes_per_sec": 0, 00:13:46.474 "w_mbytes_per_sec": 0 00:13:46.474 }, 00:13:46.474 "claimed": true, 00:13:46.474 "claim_type": "exclusive_write", 00:13:46.474 "zoned": false, 00:13:46.474 "supported_io_types": { 00:13:46.474 "read": true, 00:13:46.474 "write": true, 00:13:46.474 "unmap": true, 00:13:46.474 "flush": true, 00:13:46.474 "reset": true, 00:13:46.475 "nvme_admin": false, 00:13:46.475 "nvme_io": false, 00:13:46.475 "nvme_io_md": false, 00:13:46.475 "write_zeroes": true, 00:13:46.475 "zcopy": true, 00:13:46.475 "get_zone_info": false, 00:13:46.475 "zone_management": false, 00:13:46.475 "zone_append": false, 00:13:46.475 "compare": false, 00:13:46.475 "compare_and_write": false, 00:13:46.475 "abort": true, 00:13:46.475 "seek_hole": false, 00:13:46.475 "seek_data": false, 00:13:46.475 "copy": true, 00:13:46.475 "nvme_iov_md": false 00:13:46.475 }, 00:13:46.475 "memory_domains": [ 00:13:46.475 { 00:13:46.475 "dma_device_id": "system", 00:13:46.475 "dma_device_type": 1 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.475 "dma_device_type": 2 00:13:46.475 } 
00:13:46.475 ], 00:13:46.475 "driver_specific": {} 00:13:46.475 } 00:13:46.475 ] 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.475 "name": "Existed_Raid", 00:13:46.475 "uuid": "02b78fa0-f657-45be-b2a5-c24fd4b1ca1a", 00:13:46.475 "strip_size_kb": 64, 00:13:46.475 "state": "online", 00:13:46.475 "raid_level": "raid5f", 00:13:46.475 "superblock": false, 00:13:46.475 "num_base_bdevs": 4, 00:13:46.475 "num_base_bdevs_discovered": 4, 00:13:46.475 "num_base_bdevs_operational": 4, 00:13:46.475 "base_bdevs_list": [ 00:13:46.475 { 00:13:46.475 "name": "NewBaseBdev", 00:13:46.475 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:46.475 "is_configured": true, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 65536 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev2", 00:13:46.475 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:46.475 "is_configured": true, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 65536 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev3", 00:13:46.475 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:46.475 "is_configured": true, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 65536 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev4", 00:13:46.475 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:46.475 "is_configured": true, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 65536 00:13:46.475 } 00:13:46.475 ] 00:13:46.475 }' 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.475 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.735 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.735 [2024-12-06 11:56:44.490460] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.996 "name": "Existed_Raid", 00:13:46.996 "aliases": [ 00:13:46.996 "02b78fa0-f657-45be-b2a5-c24fd4b1ca1a" 00:13:46.996 ], 00:13:46.996 "product_name": "Raid Volume", 00:13:46.996 "block_size": 512, 00:13:46.996 "num_blocks": 196608, 00:13:46.996 "uuid": "02b78fa0-f657-45be-b2a5-c24fd4b1ca1a", 00:13:46.996 "assigned_rate_limits": { 00:13:46.996 "rw_ios_per_sec": 0, 00:13:46.996 "rw_mbytes_per_sec": 0, 00:13:46.996 "r_mbytes_per_sec": 0, 00:13:46.996 "w_mbytes_per_sec": 0 00:13:46.996 }, 00:13:46.996 "claimed": false, 00:13:46.996 "zoned": false, 00:13:46.996 "supported_io_types": { 00:13:46.996 "read": true, 00:13:46.996 "write": true, 00:13:46.996 "unmap": false, 00:13:46.996 "flush": false, 00:13:46.996 "reset": true, 00:13:46.996 "nvme_admin": false, 00:13:46.996 "nvme_io": false, 00:13:46.996 "nvme_io_md": 
false, 00:13:46.996 "write_zeroes": true, 00:13:46.996 "zcopy": false, 00:13:46.996 "get_zone_info": false, 00:13:46.996 "zone_management": false, 00:13:46.996 "zone_append": false, 00:13:46.996 "compare": false, 00:13:46.996 "compare_and_write": false, 00:13:46.996 "abort": false, 00:13:46.996 "seek_hole": false, 00:13:46.996 "seek_data": false, 00:13:46.996 "copy": false, 00:13:46.996 "nvme_iov_md": false 00:13:46.996 }, 00:13:46.996 "driver_specific": { 00:13:46.996 "raid": { 00:13:46.996 "uuid": "02b78fa0-f657-45be-b2a5-c24fd4b1ca1a", 00:13:46.996 "strip_size_kb": 64, 00:13:46.996 "state": "online", 00:13:46.996 "raid_level": "raid5f", 00:13:46.996 "superblock": false, 00:13:46.996 "num_base_bdevs": 4, 00:13:46.996 "num_base_bdevs_discovered": 4, 00:13:46.996 "num_base_bdevs_operational": 4, 00:13:46.996 "base_bdevs_list": [ 00:13:46.996 { 00:13:46.996 "name": "NewBaseBdev", 00:13:46.996 "uuid": "3c34e455-9b4b-4adc-a7c5-063453416340", 00:13:46.996 "is_configured": true, 00:13:46.996 "data_offset": 0, 00:13:46.996 "data_size": 65536 00:13:46.996 }, 00:13:46.996 { 00:13:46.996 "name": "BaseBdev2", 00:13:46.996 "uuid": "ae843e58-6178-4303-b293-19297ca63253", 00:13:46.996 "is_configured": true, 00:13:46.996 "data_offset": 0, 00:13:46.996 "data_size": 65536 00:13:46.996 }, 00:13:46.996 { 00:13:46.996 "name": "BaseBdev3", 00:13:46.996 "uuid": "c061642a-ff8d-4505-be20-04aa86930988", 00:13:46.996 "is_configured": true, 00:13:46.996 "data_offset": 0, 00:13:46.996 "data_size": 65536 00:13:46.996 }, 00:13:46.996 { 00:13:46.996 "name": "BaseBdev4", 00:13:46.996 "uuid": "61ffaddf-4b93-48e3-92be-b9cff9f6ae68", 00:13:46.996 "is_configured": true, 00:13:46.996 "data_offset": 0, 00:13:46.996 "data_size": 65536 00:13:46.996 } 00:13:46.996 ] 00:13:46.996 } 00:13:46.996 } 00:13:46.996 }' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.996 11:56:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:46.996 BaseBdev2 00:13:46.996 BaseBdev3 00:13:46.996 BaseBdev4' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.996 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.258 11:56:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.258 [2024-12-06 11:56:44.817716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.258 [2024-12-06 11:56:44.817741] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.258 [2024-12-06 11:56:44.817803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.258 [2024-12-06 11:56:44.818035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.258 [2024-12-06 11:56:44.818044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92839 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 92839 ']' 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 92839 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92839 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.258 killing process with pid 92839 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92839' 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 92839 00:13:47.258 [2024-12-06 11:56:44.865749] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.258 11:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 92839 00:13:47.258 [2024-12-06 11:56:44.907036] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:47.520 00:13:47.520 real 0m9.641s 00:13:47.520 user 0m16.484s 00:13:47.520 sys 0m2.050s 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.520 ************************************ 00:13:47.520 END TEST raid5f_state_function_test 00:13:47.520 ************************************ 00:13:47.520 11:56:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:47.520 11:56:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:47.520 11:56:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.520 11:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.520 ************************************ 00:13:47.520 START TEST 
raid5f_state_function_test_sb 00:13:47.520 ************************************ 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.520 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:47.521 
11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93494 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93494' 00:13:47.521 Process raid pid: 93494 00:13:47.521 11:56:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93494 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93494 ']' 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.521 11:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.782 [2024-12-06 11:56:45.330021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:47.782 [2024-12-06 11:56:45.330259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.782 [2024-12-06 11:56:45.473050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.782 [2024-12-06 11:56:45.516901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.041 [2024-12-06 11:56:45.559002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.041 [2024-12-06 11:56:45.559037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.612 [2024-12-06 11:56:46.152245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.612 [2024-12-06 11:56:46.152292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.612 [2024-12-06 11:56:46.152304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.612 [2024-12-06 11:56:46.152313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.612 [2024-12-06 11:56:46.152319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:48.612 [2024-12-06 11:56:46.152330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.612 [2024-12-06 11:56:46.152335] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:48.612 [2024-12-06 11:56:46.152343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.612 "name": "Existed_Raid", 00:13:48.612 "uuid": "d094b2c6-ab1a-4e42-abad-52a5d277e108", 00:13:48.612 "strip_size_kb": 64, 00:13:48.612 "state": "configuring", 00:13:48.612 "raid_level": "raid5f", 00:13:48.612 "superblock": true, 00:13:48.612 "num_base_bdevs": 4, 00:13:48.612 "num_base_bdevs_discovered": 0, 00:13:48.612 "num_base_bdevs_operational": 4, 00:13:48.612 "base_bdevs_list": [ 00:13:48.612 { 00:13:48.612 "name": "BaseBdev1", 00:13:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.612 "is_configured": false, 00:13:48.612 "data_offset": 0, 00:13:48.612 "data_size": 0 00:13:48.612 }, 00:13:48.612 { 00:13:48.612 "name": "BaseBdev2", 00:13:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.612 "is_configured": false, 00:13:48.612 "data_offset": 0, 00:13:48.612 "data_size": 0 00:13:48.612 }, 00:13:48.612 { 00:13:48.612 "name": "BaseBdev3", 00:13:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.612 "is_configured": false, 00:13:48.612 "data_offset": 0, 00:13:48.612 "data_size": 0 00:13:48.612 }, 00:13:48.612 { 00:13:48.612 "name": "BaseBdev4", 00:13:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.612 "is_configured": false, 00:13:48.612 "data_offset": 0, 00:13:48.612 "data_size": 0 00:13:48.612 } 00:13:48.612 ] 00:13:48.612 }' 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.612 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.872 [2024-12-06 11:56:46.595380] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.872 [2024-12-06 11:56:46.595484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.872 [2024-12-06 11:56:46.607391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.872 [2024-12-06 11:56:46.607476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.872 [2024-12-06 11:56:46.607501] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.872 [2024-12-06 11:56:46.607523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.872 [2024-12-06 11:56:46.607541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.872 [2024-12-06 11:56:46.607561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.872 [2024-12-06 11:56:46.607577] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:48.872 [2024-12-06 11:56:46.607613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.872 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.873 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.873 [2024-12-06 11:56:46.628002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.133 BaseBdev1 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.133 [ 00:13:49.133 { 00:13:49.133 "name": "BaseBdev1", 00:13:49.133 "aliases": [ 00:13:49.133 "718c3670-a248-4fc5-9129-d4e4dbee2da0" 00:13:49.133 ], 00:13:49.133 "product_name": "Malloc disk", 00:13:49.133 "block_size": 512, 00:13:49.133 "num_blocks": 65536, 00:13:49.133 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:49.133 "assigned_rate_limits": { 00:13:49.133 "rw_ios_per_sec": 0, 00:13:49.133 "rw_mbytes_per_sec": 0, 00:13:49.133 "r_mbytes_per_sec": 0, 00:13:49.133 "w_mbytes_per_sec": 0 00:13:49.133 }, 00:13:49.133 "claimed": true, 00:13:49.133 "claim_type": "exclusive_write", 00:13:49.133 "zoned": false, 00:13:49.133 "supported_io_types": { 00:13:49.133 "read": true, 00:13:49.133 "write": true, 00:13:49.133 "unmap": true, 00:13:49.133 "flush": true, 00:13:49.133 "reset": true, 00:13:49.133 "nvme_admin": false, 00:13:49.133 "nvme_io": false, 00:13:49.133 "nvme_io_md": false, 00:13:49.133 "write_zeroes": true, 00:13:49.133 "zcopy": true, 00:13:49.133 "get_zone_info": false, 00:13:49.133 "zone_management": false, 00:13:49.133 "zone_append": false, 00:13:49.133 "compare": false, 00:13:49.133 "compare_and_write": false, 00:13:49.133 "abort": true, 00:13:49.133 "seek_hole": false, 00:13:49.133 "seek_data": false, 00:13:49.133 "copy": true, 00:13:49.133 "nvme_iov_md": false 00:13:49.133 }, 00:13:49.133 "memory_domains": [ 00:13:49.133 { 00:13:49.133 "dma_device_id": "system", 00:13:49.133 "dma_device_type": 1 00:13:49.133 }, 00:13:49.133 { 00:13:49.133 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:49.133 "dma_device_type": 2 00:13:49.133 } 00:13:49.133 ], 00:13:49.133 "driver_specific": {} 00:13:49.133 } 00:13:49.133 ] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.133 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.133 "name": "Existed_Raid", 00:13:49.133 "uuid": "52505a5e-078f-4c3b-8156-5ddf3ebd8244", 00:13:49.133 "strip_size_kb": 64, 00:13:49.133 "state": "configuring", 00:13:49.133 "raid_level": "raid5f", 00:13:49.133 "superblock": true, 00:13:49.133 "num_base_bdevs": 4, 00:13:49.133 "num_base_bdevs_discovered": 1, 00:13:49.133 "num_base_bdevs_operational": 4, 00:13:49.133 "base_bdevs_list": [ 00:13:49.133 { 00:13:49.133 "name": "BaseBdev1", 00:13:49.133 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:49.133 "is_configured": true, 00:13:49.133 "data_offset": 2048, 00:13:49.133 "data_size": 63488 00:13:49.133 }, 00:13:49.133 { 00:13:49.133 "name": "BaseBdev2", 00:13:49.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.134 "is_configured": false, 00:13:49.134 "data_offset": 0, 00:13:49.134 "data_size": 0 00:13:49.134 }, 00:13:49.134 { 00:13:49.134 "name": "BaseBdev3", 00:13:49.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.134 "is_configured": false, 00:13:49.134 "data_offset": 0, 00:13:49.134 "data_size": 0 00:13:49.134 }, 00:13:49.134 { 00:13:49.134 "name": "BaseBdev4", 00:13:49.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.134 "is_configured": false, 00:13:49.134 "data_offset": 0, 00:13:49.134 "data_size": 0 00:13:49.134 } 00:13:49.134 ] 00:13:49.134 }' 00:13:49.134 11:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.134 11:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.394 11:56:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.394 [2024-12-06 11:56:47.139202] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.394 [2024-12-06 11:56:47.139249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.394 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.654 [2024-12-06 11:56:47.151260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.654 [2024-12-06 11:56:47.153022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.654 [2024-12-06 11:56:47.153111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.654 [2024-12-06 11:56:47.153123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:49.654 [2024-12-06 11:56:47.153132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.654 [2024-12-06 11:56:47.153139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:49.654 [2024-12-06 11:56:47.153146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.654 11:56:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.654 "name": "Existed_Raid", 00:13:49.654 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:49.654 "strip_size_kb": 64, 00:13:49.654 "state": "configuring", 00:13:49.654 "raid_level": "raid5f", 00:13:49.654 "superblock": true, 00:13:49.654 "num_base_bdevs": 4, 00:13:49.654 "num_base_bdevs_discovered": 1, 00:13:49.654 "num_base_bdevs_operational": 4, 00:13:49.654 "base_bdevs_list": [ 00:13:49.654 { 00:13:49.654 "name": "BaseBdev1", 00:13:49.654 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:49.654 "is_configured": true, 00:13:49.654 "data_offset": 2048, 00:13:49.654 "data_size": 63488 00:13:49.654 }, 00:13:49.654 { 00:13:49.654 "name": "BaseBdev2", 00:13:49.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.654 "is_configured": false, 00:13:49.654 "data_offset": 0, 00:13:49.654 "data_size": 0 00:13:49.654 }, 00:13:49.654 { 00:13:49.654 "name": "BaseBdev3", 00:13:49.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.654 "is_configured": false, 00:13:49.654 "data_offset": 0, 00:13:49.654 "data_size": 0 00:13:49.654 }, 00:13:49.654 { 00:13:49.654 "name": "BaseBdev4", 00:13:49.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.654 "is_configured": false, 00:13:49.654 "data_offset": 0, 00:13:49.654 "data_size": 0 00:13:49.654 } 00:13:49.654 ] 00:13:49.654 }' 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.654 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.916 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.916 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:49.916 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.916 [2024-12-06 11:56:47.569515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.916 BaseBdev2 00:13:49.916 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.917 [ 00:13:49.917 { 00:13:49.917 "name": "BaseBdev2", 00:13:49.917 "aliases": [ 00:13:49.917 
"632cd660-4bed-4a17-9541-40d1b7bdbd28" 00:13:49.917 ], 00:13:49.917 "product_name": "Malloc disk", 00:13:49.917 "block_size": 512, 00:13:49.917 "num_blocks": 65536, 00:13:49.917 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:49.917 "assigned_rate_limits": { 00:13:49.917 "rw_ios_per_sec": 0, 00:13:49.917 "rw_mbytes_per_sec": 0, 00:13:49.917 "r_mbytes_per_sec": 0, 00:13:49.917 "w_mbytes_per_sec": 0 00:13:49.917 }, 00:13:49.917 "claimed": true, 00:13:49.917 "claim_type": "exclusive_write", 00:13:49.917 "zoned": false, 00:13:49.917 "supported_io_types": { 00:13:49.917 "read": true, 00:13:49.917 "write": true, 00:13:49.917 "unmap": true, 00:13:49.917 "flush": true, 00:13:49.917 "reset": true, 00:13:49.917 "nvme_admin": false, 00:13:49.917 "nvme_io": false, 00:13:49.917 "nvme_io_md": false, 00:13:49.917 "write_zeroes": true, 00:13:49.917 "zcopy": true, 00:13:49.917 "get_zone_info": false, 00:13:49.917 "zone_management": false, 00:13:49.917 "zone_append": false, 00:13:49.917 "compare": false, 00:13:49.917 "compare_and_write": false, 00:13:49.917 "abort": true, 00:13:49.917 "seek_hole": false, 00:13:49.917 "seek_data": false, 00:13:49.917 "copy": true, 00:13:49.917 "nvme_iov_md": false 00:13:49.917 }, 00:13:49.917 "memory_domains": [ 00:13:49.917 { 00:13:49.917 "dma_device_id": "system", 00:13:49.917 "dma_device_type": 1 00:13:49.917 }, 00:13:49.917 { 00:13:49.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.917 "dma_device_type": 2 00:13:49.917 } 00:13:49.917 ], 00:13:49.917 "driver_specific": {} 00:13:49.917 } 00:13:49.917 ] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.917 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.917 "name": "Existed_Raid", 00:13:49.917 "uuid": 
"b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:49.917 "strip_size_kb": 64, 00:13:49.917 "state": "configuring", 00:13:49.917 "raid_level": "raid5f", 00:13:49.917 "superblock": true, 00:13:49.917 "num_base_bdevs": 4, 00:13:49.917 "num_base_bdevs_discovered": 2, 00:13:49.917 "num_base_bdevs_operational": 4, 00:13:49.917 "base_bdevs_list": [ 00:13:49.917 { 00:13:49.917 "name": "BaseBdev1", 00:13:49.917 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:49.917 "is_configured": true, 00:13:49.917 "data_offset": 2048, 00:13:49.917 "data_size": 63488 00:13:49.917 }, 00:13:49.917 { 00:13:49.917 "name": "BaseBdev2", 00:13:49.917 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:49.917 "is_configured": true, 00:13:49.917 "data_offset": 2048, 00:13:49.917 "data_size": 63488 00:13:49.917 }, 00:13:49.917 { 00:13:49.917 "name": "BaseBdev3", 00:13:49.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.917 "is_configured": false, 00:13:49.917 "data_offset": 0, 00:13:49.917 "data_size": 0 00:13:49.917 }, 00:13:49.917 { 00:13:49.917 "name": "BaseBdev4", 00:13:49.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.917 "is_configured": false, 00:13:49.917 "data_offset": 0, 00:13:49.917 "data_size": 0 00:13:49.918 } 00:13:49.918 ] 00:13:49.918 }' 00:13:49.918 11:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.918 11:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 [2024-12-06 11:56:48.075384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.499 BaseBdev3 
00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 [ 00:13:50.499 { 00:13:50.499 "name": "BaseBdev3", 00:13:50.499 "aliases": [ 00:13:50.499 "400361b9-ce12-4b78-916f-6b25aad0bcf0" 00:13:50.499 ], 00:13:50.499 "product_name": "Malloc disk", 00:13:50.499 "block_size": 512, 00:13:50.499 "num_blocks": 65536, 00:13:50.499 "uuid": "400361b9-ce12-4b78-916f-6b25aad0bcf0", 00:13:50.499 
"assigned_rate_limits": { 00:13:50.499 "rw_ios_per_sec": 0, 00:13:50.499 "rw_mbytes_per_sec": 0, 00:13:50.499 "r_mbytes_per_sec": 0, 00:13:50.499 "w_mbytes_per_sec": 0 00:13:50.499 }, 00:13:50.499 "claimed": true, 00:13:50.499 "claim_type": "exclusive_write", 00:13:50.499 "zoned": false, 00:13:50.499 "supported_io_types": { 00:13:50.499 "read": true, 00:13:50.499 "write": true, 00:13:50.499 "unmap": true, 00:13:50.499 "flush": true, 00:13:50.499 "reset": true, 00:13:50.499 "nvme_admin": false, 00:13:50.499 "nvme_io": false, 00:13:50.499 "nvme_io_md": false, 00:13:50.499 "write_zeroes": true, 00:13:50.499 "zcopy": true, 00:13:50.499 "get_zone_info": false, 00:13:50.499 "zone_management": false, 00:13:50.499 "zone_append": false, 00:13:50.499 "compare": false, 00:13:50.499 "compare_and_write": false, 00:13:50.499 "abort": true, 00:13:50.499 "seek_hole": false, 00:13:50.499 "seek_data": false, 00:13:50.499 "copy": true, 00:13:50.499 "nvme_iov_md": false 00:13:50.499 }, 00:13:50.499 "memory_domains": [ 00:13:50.499 { 00:13:50.499 "dma_device_id": "system", 00:13:50.499 "dma_device_type": 1 00:13:50.499 }, 00:13:50.499 { 00:13:50.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.499 "dma_device_type": 2 00:13:50.499 } 00:13:50.499 ], 00:13:50.499 "driver_specific": {} 00:13:50.499 } 00:13:50.499 ] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.499 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.499 "name": "Existed_Raid", 00:13:50.499 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:50.499 "strip_size_kb": 64, 00:13:50.499 "state": "configuring", 00:13:50.499 "raid_level": "raid5f", 00:13:50.499 "superblock": true, 00:13:50.499 "num_base_bdevs": 4, 00:13:50.499 "num_base_bdevs_discovered": 3, 
00:13:50.500 "num_base_bdevs_operational": 4, 00:13:50.500 "base_bdevs_list": [ 00:13:50.500 { 00:13:50.500 "name": "BaseBdev1", 00:13:50.500 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:50.500 "is_configured": true, 00:13:50.500 "data_offset": 2048, 00:13:50.500 "data_size": 63488 00:13:50.500 }, 00:13:50.500 { 00:13:50.500 "name": "BaseBdev2", 00:13:50.500 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:50.500 "is_configured": true, 00:13:50.500 "data_offset": 2048, 00:13:50.500 "data_size": 63488 00:13:50.500 }, 00:13:50.500 { 00:13:50.500 "name": "BaseBdev3", 00:13:50.500 "uuid": "400361b9-ce12-4b78-916f-6b25aad0bcf0", 00:13:50.500 "is_configured": true, 00:13:50.500 "data_offset": 2048, 00:13:50.500 "data_size": 63488 00:13:50.500 }, 00:13:50.500 { 00:13:50.500 "name": "BaseBdev4", 00:13:50.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.500 "is_configured": false, 00:13:50.500 "data_offset": 0, 00:13:50.500 "data_size": 0 00:13:50.500 } 00:13:50.500 ] 00:13:50.500 }' 00:13:50.500 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.500 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.760 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.760 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.760 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.760 [2024-12-06 11:56:48.505564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.760 [2024-12-06 11:56:48.505882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:50.760 [2024-12-06 11:56:48.505932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:50.761 [2024-12-06 
11:56:48.506211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:50.761 BaseBdev4 00:13:50.761 [2024-12-06 11:56:48.506750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:50.761 [2024-12-06 11:56:48.506819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:50.761 [2024-12-06 11:56:48.506985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.761 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.020 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:51.021 11:56:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.021 [ 00:13:51.021 { 00:13:51.021 "name": "BaseBdev4", 00:13:51.021 "aliases": [ 00:13:51.021 "af698035-33e5-4526-8dc2-fafc00e8cdea" 00:13:51.021 ], 00:13:51.021 "product_name": "Malloc disk", 00:13:51.021 "block_size": 512, 00:13:51.021 "num_blocks": 65536, 00:13:51.021 "uuid": "af698035-33e5-4526-8dc2-fafc00e8cdea", 00:13:51.021 "assigned_rate_limits": { 00:13:51.021 "rw_ios_per_sec": 0, 00:13:51.021 "rw_mbytes_per_sec": 0, 00:13:51.021 "r_mbytes_per_sec": 0, 00:13:51.021 "w_mbytes_per_sec": 0 00:13:51.021 }, 00:13:51.021 "claimed": true, 00:13:51.021 "claim_type": "exclusive_write", 00:13:51.021 "zoned": false, 00:13:51.021 "supported_io_types": { 00:13:51.021 "read": true, 00:13:51.021 "write": true, 00:13:51.021 "unmap": true, 00:13:51.021 "flush": true, 00:13:51.021 "reset": true, 00:13:51.021 "nvme_admin": false, 00:13:51.021 "nvme_io": false, 00:13:51.021 "nvme_io_md": false, 00:13:51.021 "write_zeroes": true, 00:13:51.021 "zcopy": true, 00:13:51.021 "get_zone_info": false, 00:13:51.021 "zone_management": false, 00:13:51.021 "zone_append": false, 00:13:51.021 "compare": false, 00:13:51.021 "compare_and_write": false, 00:13:51.021 "abort": true, 00:13:51.021 "seek_hole": false, 00:13:51.021 "seek_data": false, 00:13:51.021 "copy": true, 00:13:51.021 "nvme_iov_md": false 00:13:51.021 }, 00:13:51.021 "memory_domains": [ 00:13:51.021 { 00:13:51.021 "dma_device_id": "system", 00:13:51.021 "dma_device_type": 1 00:13:51.021 }, 00:13:51.021 { 00:13:51.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.021 "dma_device_type": 2 00:13:51.021 } 00:13:51.021 ], 00:13:51.021 "driver_specific": {} 00:13:51.021 } 00:13:51.021 ] 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.021 11:56:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.021 "name": "Existed_Raid", 00:13:51.021 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:51.021 "strip_size_kb": 64, 00:13:51.021 "state": "online", 00:13:51.021 "raid_level": "raid5f", 00:13:51.021 "superblock": true, 00:13:51.021 "num_base_bdevs": 4, 00:13:51.021 "num_base_bdevs_discovered": 4, 00:13:51.021 "num_base_bdevs_operational": 4, 00:13:51.021 "base_bdevs_list": [ 00:13:51.021 { 00:13:51.021 "name": "BaseBdev1", 00:13:51.021 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:51.021 "is_configured": true, 00:13:51.021 "data_offset": 2048, 00:13:51.021 "data_size": 63488 00:13:51.021 }, 00:13:51.021 { 00:13:51.021 "name": "BaseBdev2", 00:13:51.021 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:51.021 "is_configured": true, 00:13:51.021 "data_offset": 2048, 00:13:51.021 "data_size": 63488 00:13:51.021 }, 00:13:51.021 { 00:13:51.021 "name": "BaseBdev3", 00:13:51.021 "uuid": "400361b9-ce12-4b78-916f-6b25aad0bcf0", 00:13:51.021 "is_configured": true, 00:13:51.021 "data_offset": 2048, 00:13:51.021 "data_size": 63488 00:13:51.021 }, 00:13:51.021 { 00:13:51.021 "name": "BaseBdev4", 00:13:51.021 "uuid": "af698035-33e5-4526-8dc2-fafc00e8cdea", 00:13:51.021 "is_configured": true, 00:13:51.021 "data_offset": 2048, 00:13:51.021 "data_size": 63488 00:13:51.021 } 00:13:51.021 ] 00:13:51.021 }' 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.021 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.281 11:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.281 [2024-12-06 11:56:48.992899] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.281 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.281 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.281 "name": "Existed_Raid", 00:13:51.281 "aliases": [ 00:13:51.281 "b7ffb792-014e-4232-9ff3-11da3b5f3885" 00:13:51.281 ], 00:13:51.281 "product_name": "Raid Volume", 00:13:51.281 "block_size": 512, 00:13:51.281 "num_blocks": 190464, 00:13:51.281 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:51.281 "assigned_rate_limits": { 00:13:51.281 "rw_ios_per_sec": 0, 00:13:51.281 "rw_mbytes_per_sec": 0, 00:13:51.281 "r_mbytes_per_sec": 0, 00:13:51.281 "w_mbytes_per_sec": 0 00:13:51.281 }, 00:13:51.281 "claimed": false, 00:13:51.281 "zoned": false, 00:13:51.281 "supported_io_types": { 00:13:51.281 "read": true, 00:13:51.281 "write": true, 00:13:51.281 "unmap": false, 00:13:51.281 "flush": false, 
00:13:51.281 "reset": true, 00:13:51.281 "nvme_admin": false, 00:13:51.281 "nvme_io": false, 00:13:51.281 "nvme_io_md": false, 00:13:51.281 "write_zeroes": true, 00:13:51.281 "zcopy": false, 00:13:51.281 "get_zone_info": false, 00:13:51.281 "zone_management": false, 00:13:51.281 "zone_append": false, 00:13:51.281 "compare": false, 00:13:51.281 "compare_and_write": false, 00:13:51.281 "abort": false, 00:13:51.281 "seek_hole": false, 00:13:51.281 "seek_data": false, 00:13:51.281 "copy": false, 00:13:51.281 "nvme_iov_md": false 00:13:51.281 }, 00:13:51.281 "driver_specific": { 00:13:51.281 "raid": { 00:13:51.281 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:51.281 "strip_size_kb": 64, 00:13:51.281 "state": "online", 00:13:51.282 "raid_level": "raid5f", 00:13:51.282 "superblock": true, 00:13:51.282 "num_base_bdevs": 4, 00:13:51.282 "num_base_bdevs_discovered": 4, 00:13:51.282 "num_base_bdevs_operational": 4, 00:13:51.282 "base_bdevs_list": [ 00:13:51.282 { 00:13:51.282 "name": "BaseBdev1", 00:13:51.282 "uuid": "718c3670-a248-4fc5-9129-d4e4dbee2da0", 00:13:51.282 "is_configured": true, 00:13:51.282 "data_offset": 2048, 00:13:51.282 "data_size": 63488 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "name": "BaseBdev2", 00:13:51.282 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:51.282 "is_configured": true, 00:13:51.282 "data_offset": 2048, 00:13:51.282 "data_size": 63488 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "name": "BaseBdev3", 00:13:51.282 "uuid": "400361b9-ce12-4b78-916f-6b25aad0bcf0", 00:13:51.282 "is_configured": true, 00:13:51.282 "data_offset": 2048, 00:13:51.282 "data_size": 63488 00:13:51.282 }, 00:13:51.282 { 00:13:51.282 "name": "BaseBdev4", 00:13:51.282 "uuid": "af698035-33e5-4526-8dc2-fafc00e8cdea", 00:13:51.282 "is_configured": true, 00:13:51.282 "data_offset": 2048, 00:13:51.282 "data_size": 63488 00:13:51.282 } 00:13:51.282 ] 00:13:51.282 } 00:13:51.282 } 00:13:51.282 }' 00:13:51.282 11:56:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:51.542 BaseBdev2 00:13:51.542 BaseBdev3 00:13:51.542 BaseBdev4' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.542 11:56:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.542 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.802 [2024-12-06 11:56:49.316212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.802 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.803 "name": "Existed_Raid", 00:13:51.803 "uuid": "b7ffb792-014e-4232-9ff3-11da3b5f3885", 00:13:51.803 "strip_size_kb": 64, 00:13:51.803 "state": "online", 00:13:51.803 "raid_level": "raid5f", 00:13:51.803 "superblock": true, 00:13:51.803 "num_base_bdevs": 4, 00:13:51.803 "num_base_bdevs_discovered": 3, 00:13:51.803 "num_base_bdevs_operational": 3, 00:13:51.803 "base_bdevs_list": [ 00:13:51.803 { 00:13:51.803 "name": null, 00:13:51.803 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:51.803 "is_configured": false, 00:13:51.803 "data_offset": 0, 00:13:51.803 "data_size": 63488 00:13:51.803 }, 00:13:51.803 { 00:13:51.803 "name": "BaseBdev2", 00:13:51.803 "uuid": "632cd660-4bed-4a17-9541-40d1b7bdbd28", 00:13:51.803 "is_configured": true, 00:13:51.803 "data_offset": 2048, 00:13:51.803 "data_size": 63488 00:13:51.803 }, 00:13:51.803 { 00:13:51.803 "name": "BaseBdev3", 00:13:51.803 "uuid": "400361b9-ce12-4b78-916f-6b25aad0bcf0", 00:13:51.803 "is_configured": true, 00:13:51.803 "data_offset": 2048, 00:13:51.803 "data_size": 63488 00:13:51.803 }, 00:13:51.803 { 00:13:51.803 "name": "BaseBdev4", 00:13:51.803 "uuid": "af698035-33e5-4526-8dc2-fafc00e8cdea", 00:13:51.803 "is_configured": true, 00:13:51.803 "data_offset": 2048, 00:13:51.803 "data_size": 63488 00:13:51.803 } 00:13:51.803 ] 00:13:51.803 }' 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.803 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.063 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 [2024-12-06 11:56:49.850641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.323 [2024-12-06 11:56:49.850841] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.323 [2024-12-06 11:56:49.861987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.323 
11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 [2024-12-06 11:56:49.921895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 [2024-12-06 11:56:49.988890] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:52.323 [2024-12-06 11:56:49.988933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:52.323 11:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.323 BaseBdev2 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.323 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.583 [ 00:13:52.583 { 00:13:52.583 "name": "BaseBdev2", 00:13:52.583 "aliases": [ 00:13:52.583 "470ed1ba-4f09-4d09-b789-7ca3911178d4" 00:13:52.583 ], 00:13:52.583 "product_name": "Malloc disk", 00:13:52.583 "block_size": 512, 00:13:52.583 "num_blocks": 65536, 00:13:52.583 "uuid": 
"470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:52.583 "assigned_rate_limits": { 00:13:52.583 "rw_ios_per_sec": 0, 00:13:52.583 "rw_mbytes_per_sec": 0, 00:13:52.583 "r_mbytes_per_sec": 0, 00:13:52.583 "w_mbytes_per_sec": 0 00:13:52.583 }, 00:13:52.583 "claimed": false, 00:13:52.583 "zoned": false, 00:13:52.583 "supported_io_types": { 00:13:52.583 "read": true, 00:13:52.583 "write": true, 00:13:52.583 "unmap": true, 00:13:52.583 "flush": true, 00:13:52.583 "reset": true, 00:13:52.583 "nvme_admin": false, 00:13:52.583 "nvme_io": false, 00:13:52.583 "nvme_io_md": false, 00:13:52.583 "write_zeroes": true, 00:13:52.583 "zcopy": true, 00:13:52.583 "get_zone_info": false, 00:13:52.583 "zone_management": false, 00:13:52.583 "zone_append": false, 00:13:52.583 "compare": false, 00:13:52.583 "compare_and_write": false, 00:13:52.583 "abort": true, 00:13:52.583 "seek_hole": false, 00:13:52.583 "seek_data": false, 00:13:52.583 "copy": true, 00:13:52.583 "nvme_iov_md": false 00:13:52.583 }, 00:13:52.583 "memory_domains": [ 00:13:52.583 { 00:13:52.583 "dma_device_id": "system", 00:13:52.583 "dma_device_type": 1 00:13:52.583 }, 00:13:52.583 { 00:13:52.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.583 "dma_device_type": 2 00:13:52.583 } 00:13:52.583 ], 00:13:52.583 "driver_specific": {} 00:13:52.583 } 00:13:52.583 ] 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.583 BaseBdev3 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.583 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 [ 00:13:52.584 { 00:13:52.584 "name": "BaseBdev3", 00:13:52.584 "aliases": [ 00:13:52.584 "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6" 00:13:52.584 ], 00:13:52.584 
"product_name": "Malloc disk", 00:13:52.584 "block_size": 512, 00:13:52.584 "num_blocks": 65536, 00:13:52.584 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:52.584 "assigned_rate_limits": { 00:13:52.584 "rw_ios_per_sec": 0, 00:13:52.584 "rw_mbytes_per_sec": 0, 00:13:52.584 "r_mbytes_per_sec": 0, 00:13:52.584 "w_mbytes_per_sec": 0 00:13:52.584 }, 00:13:52.584 "claimed": false, 00:13:52.584 "zoned": false, 00:13:52.584 "supported_io_types": { 00:13:52.584 "read": true, 00:13:52.584 "write": true, 00:13:52.584 "unmap": true, 00:13:52.584 "flush": true, 00:13:52.584 "reset": true, 00:13:52.584 "nvme_admin": false, 00:13:52.584 "nvme_io": false, 00:13:52.584 "nvme_io_md": false, 00:13:52.584 "write_zeroes": true, 00:13:52.584 "zcopy": true, 00:13:52.584 "get_zone_info": false, 00:13:52.584 "zone_management": false, 00:13:52.584 "zone_append": false, 00:13:52.584 "compare": false, 00:13:52.584 "compare_and_write": false, 00:13:52.584 "abort": true, 00:13:52.584 "seek_hole": false, 00:13:52.584 "seek_data": false, 00:13:52.584 "copy": true, 00:13:52.584 "nvme_iov_md": false 00:13:52.584 }, 00:13:52.584 "memory_domains": [ 00:13:52.584 { 00:13:52.584 "dma_device_id": "system", 00:13:52.584 "dma_device_type": 1 00:13:52.584 }, 00:13:52.584 { 00:13:52.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.584 "dma_device_type": 2 00:13:52.584 } 00:13:52.584 ], 00:13:52.584 "driver_specific": {} 00:13:52.584 } 00:13:52.584 ] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 BaseBdev4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 [ 00:13:52.584 { 00:13:52.584 "name": "BaseBdev4", 00:13:52.584 
"aliases": [ 00:13:52.584 "7779bdd7-4787-4f93-a6e0-e8397e83ad38" 00:13:52.584 ], 00:13:52.584 "product_name": "Malloc disk", 00:13:52.584 "block_size": 512, 00:13:52.584 "num_blocks": 65536, 00:13:52.584 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:52.584 "assigned_rate_limits": { 00:13:52.584 "rw_ios_per_sec": 0, 00:13:52.584 "rw_mbytes_per_sec": 0, 00:13:52.584 "r_mbytes_per_sec": 0, 00:13:52.584 "w_mbytes_per_sec": 0 00:13:52.584 }, 00:13:52.584 "claimed": false, 00:13:52.584 "zoned": false, 00:13:52.584 "supported_io_types": { 00:13:52.584 "read": true, 00:13:52.584 "write": true, 00:13:52.584 "unmap": true, 00:13:52.584 "flush": true, 00:13:52.584 "reset": true, 00:13:52.584 "nvme_admin": false, 00:13:52.584 "nvme_io": false, 00:13:52.584 "nvme_io_md": false, 00:13:52.584 "write_zeroes": true, 00:13:52.584 "zcopy": true, 00:13:52.584 "get_zone_info": false, 00:13:52.584 "zone_management": false, 00:13:52.584 "zone_append": false, 00:13:52.584 "compare": false, 00:13:52.584 "compare_and_write": false, 00:13:52.584 "abort": true, 00:13:52.584 "seek_hole": false, 00:13:52.584 "seek_data": false, 00:13:52.584 "copy": true, 00:13:52.584 "nvme_iov_md": false 00:13:52.584 }, 00:13:52.584 "memory_domains": [ 00:13:52.584 { 00:13:52.584 "dma_device_id": "system", 00:13:52.584 "dma_device_type": 1 00:13:52.584 }, 00:13:52.584 { 00:13:52.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.584 "dma_device_type": 2 00:13:52.584 } 00:13:52.584 ], 00:13:52.584 "driver_specific": {} 00:13:52.584 } 00:13:52.584 ] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.584 
11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 [2024-12-06 11:56:50.215197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.584 [2024-12-06 11:56:50.215319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.584 [2024-12-06 11:56:50.215368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.584 [2024-12-06 11:56:50.217129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.584 [2024-12-06 11:56:50.217215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.584 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.584 "name": "Existed_Raid", 00:13:52.584 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:52.584 "strip_size_kb": 64, 00:13:52.584 "state": "configuring", 00:13:52.584 "raid_level": "raid5f", 00:13:52.584 "superblock": true, 00:13:52.584 "num_base_bdevs": 4, 00:13:52.584 "num_base_bdevs_discovered": 3, 00:13:52.584 "num_base_bdevs_operational": 4, 00:13:52.584 "base_bdevs_list": [ 00:13:52.584 { 00:13:52.584 "name": "BaseBdev1", 00:13:52.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.584 "is_configured": false, 00:13:52.584 "data_offset": 0, 00:13:52.584 "data_size": 0 00:13:52.584 }, 00:13:52.584 { 00:13:52.584 "name": "BaseBdev2", 00:13:52.584 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:52.584 "is_configured": true, 00:13:52.584 "data_offset": 2048, 00:13:52.584 "data_size": 63488 00:13:52.584 }, 00:13:52.584 { 00:13:52.585 "name": "BaseBdev3", 
00:13:52.585 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:52.585 "is_configured": true, 00:13:52.585 "data_offset": 2048, 00:13:52.585 "data_size": 63488 00:13:52.585 }, 00:13:52.585 { 00:13:52.585 "name": "BaseBdev4", 00:13:52.585 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:52.585 "is_configured": true, 00:13:52.585 "data_offset": 2048, 00:13:52.585 "data_size": 63488 00:13:52.585 } 00:13:52.585 ] 00:13:52.585 }' 00:13:52.585 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.585 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 [2024-12-06 11:56:50.702339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.173 
11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.173 "name": "Existed_Raid", 00:13:53.173 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:53.173 "strip_size_kb": 64, 00:13:53.173 "state": "configuring", 00:13:53.173 "raid_level": "raid5f", 00:13:53.173 "superblock": true, 00:13:53.173 "num_base_bdevs": 4, 00:13:53.173 "num_base_bdevs_discovered": 2, 00:13:53.173 "num_base_bdevs_operational": 4, 00:13:53.173 "base_bdevs_list": [ 00:13:53.173 { 00:13:53.173 "name": "BaseBdev1", 00:13:53.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.173 "is_configured": false, 00:13:53.173 "data_offset": 0, 00:13:53.173 "data_size": 0 00:13:53.173 }, 00:13:53.173 { 00:13:53.173 "name": null, 00:13:53.173 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:53.173 "is_configured": false, 00:13:53.173 "data_offset": 0, 00:13:53.173 "data_size": 63488 00:13:53.173 }, 00:13:53.173 { 
00:13:53.173 "name": "BaseBdev3", 00:13:53.173 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 }, 00:13:53.173 { 00:13:53.173 "name": "BaseBdev4", 00:13:53.173 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 } 00:13:53.173 ] 00:13:53.173 }' 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.173 11:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.433 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.433 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.433 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.433 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.433 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 [2024-12-06 11:56:51.228355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.693 BaseBdev1 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 [ 00:13:53.693 { 00:13:53.693 "name": "BaseBdev1", 00:13:53.693 "aliases": [ 00:13:53.693 "7f2bc200-cd4c-4a6d-8fde-681eab65c577" 00:13:53.693 ], 00:13:53.693 "product_name": "Malloc disk", 00:13:53.693 "block_size": 512, 00:13:53.693 "num_blocks": 65536, 00:13:53.693 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:53.693 "assigned_rate_limits": { 00:13:53.693 "rw_ios_per_sec": 0, 00:13:53.693 "rw_mbytes_per_sec": 0, 00:13:53.693 
"r_mbytes_per_sec": 0, 00:13:53.693 "w_mbytes_per_sec": 0 00:13:53.693 }, 00:13:53.693 "claimed": true, 00:13:53.693 "claim_type": "exclusive_write", 00:13:53.693 "zoned": false, 00:13:53.693 "supported_io_types": { 00:13:53.693 "read": true, 00:13:53.693 "write": true, 00:13:53.693 "unmap": true, 00:13:53.693 "flush": true, 00:13:53.693 "reset": true, 00:13:53.693 "nvme_admin": false, 00:13:53.693 "nvme_io": false, 00:13:53.693 "nvme_io_md": false, 00:13:53.693 "write_zeroes": true, 00:13:53.693 "zcopy": true, 00:13:53.693 "get_zone_info": false, 00:13:53.693 "zone_management": false, 00:13:53.693 "zone_append": false, 00:13:53.693 "compare": false, 00:13:53.693 "compare_and_write": false, 00:13:53.693 "abort": true, 00:13:53.693 "seek_hole": false, 00:13:53.693 "seek_data": false, 00:13:53.693 "copy": true, 00:13:53.693 "nvme_iov_md": false 00:13:53.693 }, 00:13:53.693 "memory_domains": [ 00:13:53.693 { 00:13:53.693 "dma_device_id": "system", 00:13:53.693 "dma_device_type": 1 00:13:53.693 }, 00:13:53.693 { 00:13:53.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.693 "dma_device_type": 2 00:13:53.693 } 00:13:53.693 ], 00:13:53.693 "driver_specific": {} 00:13:53.693 } 00:13:53.693 ] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.693 11:56:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.693 "name": "Existed_Raid", 00:13:53.693 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:53.693 "strip_size_kb": 64, 00:13:53.693 "state": "configuring", 00:13:53.693 "raid_level": "raid5f", 00:13:53.693 "superblock": true, 00:13:53.693 "num_base_bdevs": 4, 00:13:53.693 "num_base_bdevs_discovered": 3, 00:13:53.693 "num_base_bdevs_operational": 4, 00:13:53.693 "base_bdevs_list": [ 00:13:53.693 { 00:13:53.693 "name": "BaseBdev1", 00:13:53.693 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:53.693 "is_configured": true, 00:13:53.693 "data_offset": 2048, 00:13:53.693 "data_size": 63488 00:13:53.693 
}, 00:13:53.693 { 00:13:53.693 "name": null, 00:13:53.693 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:53.693 "is_configured": false, 00:13:53.693 "data_offset": 0, 00:13:53.693 "data_size": 63488 00:13:53.693 }, 00:13:53.693 { 00:13:53.693 "name": "BaseBdev3", 00:13:53.693 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:53.693 "is_configured": true, 00:13:53.693 "data_offset": 2048, 00:13:53.693 "data_size": 63488 00:13:53.693 }, 00:13:53.693 { 00:13:53.693 "name": "BaseBdev4", 00:13:53.693 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:53.693 "is_configured": true, 00:13:53.693 "data_offset": 2048, 00:13:53.693 "data_size": 63488 00:13:53.693 } 00:13:53.693 ] 00:13:53.693 }' 00:13:53.693 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.694 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.262 
[2024-12-06 11:56:51.771464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.262 "name": "Existed_Raid", 00:13:54.262 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:54.262 "strip_size_kb": 64, 00:13:54.262 "state": "configuring", 00:13:54.262 "raid_level": "raid5f", 00:13:54.262 "superblock": true, 00:13:54.262 "num_base_bdevs": 4, 00:13:54.262 "num_base_bdevs_discovered": 2, 00:13:54.262 "num_base_bdevs_operational": 4, 00:13:54.262 "base_bdevs_list": [ 00:13:54.262 { 00:13:54.262 "name": "BaseBdev1", 00:13:54.262 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:54.262 "is_configured": true, 00:13:54.262 "data_offset": 2048, 00:13:54.262 "data_size": 63488 00:13:54.262 }, 00:13:54.262 { 00:13:54.262 "name": null, 00:13:54.262 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:54.262 "is_configured": false, 00:13:54.262 "data_offset": 0, 00:13:54.262 "data_size": 63488 00:13:54.262 }, 00:13:54.262 { 00:13:54.262 "name": null, 00:13:54.262 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:54.262 "is_configured": false, 00:13:54.262 "data_offset": 0, 00:13:54.262 "data_size": 63488 00:13:54.262 }, 00:13:54.262 { 00:13:54.262 "name": "BaseBdev4", 00:13:54.262 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:54.262 "is_configured": true, 00:13:54.262 "data_offset": 2048, 00:13:54.262 "data_size": 63488 00:13:54.262 } 00:13:54.262 ] 00:13:54.262 }' 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.262 11:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.521 [2024-12-06 11:56:52.242757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.521 11:56:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.521 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.780 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.780 "name": "Existed_Raid", 00:13:54.780 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:54.780 "strip_size_kb": 64, 00:13:54.780 "state": "configuring", 00:13:54.780 "raid_level": "raid5f", 00:13:54.780 "superblock": true, 00:13:54.781 "num_base_bdevs": 4, 00:13:54.781 "num_base_bdevs_discovered": 3, 00:13:54.781 "num_base_bdevs_operational": 4, 00:13:54.781 "base_bdevs_list": [ 00:13:54.781 { 00:13:54.781 "name": "BaseBdev1", 00:13:54.781 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": null, 00:13:54.781 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:54.781 "is_configured": false, 00:13:54.781 "data_offset": 0, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": "BaseBdev3", 00:13:54.781 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 
00:13:54.781 "name": "BaseBdev4", 00:13:54.781 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 } 00:13:54.781 ] 00:13:54.781 }' 00:13:54.781 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.781 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 [2024-12-06 11:56:52.721948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.040 "name": "Existed_Raid", 00:13:55.040 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:55.040 "strip_size_kb": 64, 00:13:55.040 "state": "configuring", 00:13:55.040 "raid_level": "raid5f", 00:13:55.040 "superblock": true, 00:13:55.040 "num_base_bdevs": 4, 00:13:55.040 "num_base_bdevs_discovered": 2, 00:13:55.040 
"num_base_bdevs_operational": 4, 00:13:55.040 "base_bdevs_list": [ 00:13:55.040 { 00:13:55.040 "name": null, 00:13:55.040 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:55.040 "is_configured": false, 00:13:55.040 "data_offset": 0, 00:13:55.040 "data_size": 63488 00:13:55.040 }, 00:13:55.040 { 00:13:55.040 "name": null, 00:13:55.040 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:55.040 "is_configured": false, 00:13:55.040 "data_offset": 0, 00:13:55.040 "data_size": 63488 00:13:55.040 }, 00:13:55.040 { 00:13:55.040 "name": "BaseBdev3", 00:13:55.040 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:55.040 "is_configured": true, 00:13:55.040 "data_offset": 2048, 00:13:55.040 "data_size": 63488 00:13:55.040 }, 00:13:55.040 { 00:13:55.040 "name": "BaseBdev4", 00:13:55.040 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:55.040 "is_configured": true, 00:13:55.040 "data_offset": 2048, 00:13:55.040 "data_size": 63488 00:13:55.040 } 00:13:55.040 ] 00:13:55.040 }' 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.040 11:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.610 [2024-12-06 11:56:53.235557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.610 "name": "Existed_Raid", 00:13:55.610 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:55.610 "strip_size_kb": 64, 00:13:55.610 "state": "configuring", 00:13:55.610 "raid_level": "raid5f", 00:13:55.610 "superblock": true, 00:13:55.610 "num_base_bdevs": 4, 00:13:55.610 "num_base_bdevs_discovered": 3, 00:13:55.610 "num_base_bdevs_operational": 4, 00:13:55.610 "base_bdevs_list": [ 00:13:55.610 { 00:13:55.610 "name": null, 00:13:55.610 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:55.610 "is_configured": false, 00:13:55.610 "data_offset": 0, 00:13:55.610 "data_size": 63488 00:13:55.610 }, 00:13:55.610 { 00:13:55.610 "name": "BaseBdev2", 00:13:55.610 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:55.610 "is_configured": true, 00:13:55.610 "data_offset": 2048, 00:13:55.610 "data_size": 63488 00:13:55.610 }, 00:13:55.610 { 00:13:55.610 "name": "BaseBdev3", 00:13:55.610 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:55.610 "is_configured": true, 00:13:55.610 "data_offset": 2048, 00:13:55.610 "data_size": 63488 00:13:55.610 }, 00:13:55.610 { 00:13:55.610 "name": "BaseBdev4", 00:13:55.610 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:55.610 "is_configured": true, 00:13:55.610 "data_offset": 2048, 00:13:55.610 "data_size": 63488 00:13:55.610 } 00:13:55.610 ] 00:13:55.610 }' 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.610 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f2bc200-cd4c-4a6d-8fde-681eab65c577 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 [2024-12-06 11:56:53.761337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:56.182 [2024-12-06 11:56:53.761589] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:56.182 [2024-12-06 
11:56:53.761643] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:56.182 [2024-12-06 11:56:53.761915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:56.182 NewBaseBdev 00:13:56.182 [2024-12-06 11:56:53.762391] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:56.182 [2024-12-06 11:56:53.762453] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:56.182 [2024-12-06 11:56:53.762553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 [ 00:13:56.182 { 00:13:56.182 "name": "NewBaseBdev", 00:13:56.182 "aliases": [ 00:13:56.182 "7f2bc200-cd4c-4a6d-8fde-681eab65c577" 00:13:56.182 ], 00:13:56.182 "product_name": "Malloc disk", 00:13:56.182 "block_size": 512, 00:13:56.182 "num_blocks": 65536, 00:13:56.182 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:56.182 "assigned_rate_limits": { 00:13:56.182 "rw_ios_per_sec": 0, 00:13:56.182 "rw_mbytes_per_sec": 0, 00:13:56.182 "r_mbytes_per_sec": 0, 00:13:56.182 "w_mbytes_per_sec": 0 00:13:56.182 }, 00:13:56.182 "claimed": true, 00:13:56.182 "claim_type": "exclusive_write", 00:13:56.182 "zoned": false, 00:13:56.182 "supported_io_types": { 00:13:56.182 "read": true, 00:13:56.182 "write": true, 00:13:56.182 "unmap": true, 00:13:56.182 "flush": true, 00:13:56.182 "reset": true, 00:13:56.182 "nvme_admin": false, 00:13:56.182 "nvme_io": false, 00:13:56.182 "nvme_io_md": false, 00:13:56.182 "write_zeroes": true, 00:13:56.182 "zcopy": true, 00:13:56.182 "get_zone_info": false, 00:13:56.182 "zone_management": false, 00:13:56.182 "zone_append": false, 00:13:56.182 "compare": false, 00:13:56.182 "compare_and_write": false, 00:13:56.182 "abort": true, 00:13:56.182 "seek_hole": false, 00:13:56.182 "seek_data": false, 00:13:56.182 "copy": true, 00:13:56.182 "nvme_iov_md": false 00:13:56.182 }, 00:13:56.182 "memory_domains": [ 00:13:56.182 { 00:13:56.182 "dma_device_id": "system", 00:13:56.182 "dma_device_type": 1 00:13:56.182 }, 00:13:56.182 { 00:13:56.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.182 "dma_device_type": 2 00:13:56.182 } 00:13:56.182 ], 00:13:56.182 "driver_specific": {} 00:13:56.182 } 00:13:56.182 ] 00:13:56.182 11:56:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:56.182 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.182 "name": "Existed_Raid", 00:13:56.182 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b", 00:13:56.182 "strip_size_kb": 64, 00:13:56.182 "state": "online", 00:13:56.182 "raid_level": "raid5f", 00:13:56.182 "superblock": true, 00:13:56.182 "num_base_bdevs": 4, 00:13:56.182 "num_base_bdevs_discovered": 4, 00:13:56.182 "num_base_bdevs_operational": 4, 00:13:56.182 "base_bdevs_list": [ 00:13:56.182 { 00:13:56.182 "name": "NewBaseBdev", 00:13:56.183 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577", 00:13:56.183 "is_configured": true, 00:13:56.183 "data_offset": 2048, 00:13:56.183 "data_size": 63488 00:13:56.183 }, 00:13:56.183 { 00:13:56.183 "name": "BaseBdev2", 00:13:56.183 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4", 00:13:56.183 "is_configured": true, 00:13:56.183 "data_offset": 2048, 00:13:56.183 "data_size": 63488 00:13:56.183 }, 00:13:56.183 { 00:13:56.183 "name": "BaseBdev3", 00:13:56.183 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6", 00:13:56.183 "is_configured": true, 00:13:56.183 "data_offset": 2048, 00:13:56.183 "data_size": 63488 00:13:56.183 }, 00:13:56.183 { 00:13:56.183 "name": "BaseBdev4", 00:13:56.183 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38", 00:13:56.183 "is_configured": true, 00:13:56.183 "data_offset": 2048, 00:13:56.183 "data_size": 63488 00:13:56.183 } 00:13:56.183 ] 00:13:56.183 }' 00:13:56.183 11:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.183 11:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.753 [2024-12-06 11:56:54.216746] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.753 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:56.753 "name": "Existed_Raid",
00:13:56.753 "aliases": [
00:13:56.753 "275d76b8-2ceb-4d30-8f77-158a612eaa6b"
00:13:56.753 ],
00:13:56.753 "product_name": "Raid Volume",
00:13:56.753 "block_size": 512,
00:13:56.753 "num_blocks": 190464,
00:13:56.753 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b",
00:13:56.753 "assigned_rate_limits": {
00:13:56.753 "rw_ios_per_sec": 0,
00:13:56.753 "rw_mbytes_per_sec": 0,
00:13:56.753 "r_mbytes_per_sec": 0,
00:13:56.753 "w_mbytes_per_sec": 0
00:13:56.753 },
00:13:56.753 "claimed": false,
00:13:56.753 "zoned": false,
00:13:56.753 "supported_io_types": {
00:13:56.753 "read": true,
00:13:56.753 "write": true,
00:13:56.753 "unmap": false,
00:13:56.753 "flush": false,
00:13:56.753 "reset": true,
00:13:56.753 "nvme_admin": false,
00:13:56.753 "nvme_io": false,
00:13:56.753 "nvme_io_md": false,
00:13:56.753 "write_zeroes": true,
00:13:56.753 "zcopy": false,
00:13:56.753 "get_zone_info": false,
00:13:56.753 "zone_management": false,
00:13:56.753 "zone_append": false,
00:13:56.753 "compare": false,
00:13:56.753 "compare_and_write": false,
00:13:56.753 "abort": false,
00:13:56.753 "seek_hole": false,
00:13:56.753 "seek_data": false,
00:13:56.753 "copy": false,
00:13:56.753 "nvme_iov_md": false
00:13:56.753 },
00:13:56.753 "driver_specific": {
00:13:56.753 "raid": {
00:13:56.753 "uuid": "275d76b8-2ceb-4d30-8f77-158a612eaa6b",
00:13:56.753 "strip_size_kb": 64,
00:13:56.753 "state": "online",
00:13:56.753 "raid_level": "raid5f",
00:13:56.753 "superblock": true,
00:13:56.753 "num_base_bdevs": 4,
00:13:56.753 "num_base_bdevs_discovered": 4,
00:13:56.753 "num_base_bdevs_operational": 4,
00:13:56.754 "base_bdevs_list": [
00:13:56.754 {
00:13:56.754 "name": "NewBaseBdev",
00:13:56.754 "uuid": "7f2bc200-cd4c-4a6d-8fde-681eab65c577",
00:13:56.754 "is_configured": true,
00:13:56.754 "data_offset": 2048,
00:13:56.754 "data_size": 63488
00:13:56.754 },
00:13:56.754 {
00:13:56.754 "name": "BaseBdev2",
00:13:56.754 "uuid": "470ed1ba-4f09-4d09-b789-7ca3911178d4",
00:13:56.754 "is_configured": true,
00:13:56.754 "data_offset": 2048,
00:13:56.754 "data_size": 63488
00:13:56.754 },
00:13:56.754 {
00:13:56.754 "name": "BaseBdev3",
00:13:56.754 "uuid": "b40d531a-eaa1-4842-bf9d-3593b8d4dcf6",
00:13:56.754 "is_configured": true,
00:13:56.754 "data_offset": 2048,
00:13:56.754 "data_size": 63488
00:13:56.754 },
00:13:56.754 {
00:13:56.754 "name": "BaseBdev4",
00:13:56.754 "uuid": "7779bdd7-4787-4f93-a6e0-e8397e83ad38",
00:13:56.754 "is_configured": true,
00:13:56.754 "data_offset": 2048,
00:13:56.754 "data_size": 63488
00:13:56.754 }
00:13:56.754 ]
00:13:56.754 }
00:13:56.754 }
00:13:56.754 }'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:13:56.754 BaseBdev2
00:13:56.754 BaseBdev3
00:13:56.754 BaseBdev4'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.754 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.014 [2024-12-06 11:56:54.548042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:57.014 [2024-12-06 11:56:54.548068] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:57.014 [2024-12-06 11:56:54.548127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:57.014 [2024-12-06 11:56:54.548374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:57.014 [2024-12-06 11:56:54.548385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93494
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93494 ']'
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93494
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93494
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93494'
00:13:57.014 killing process with pid 93494
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93494
00:13:57.014 [2024-12-06 11:56:54.598429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:57.014 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93494
00:13:57.014 [2024-12-06 11:56:54.639319] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:57.274 11:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:13:57.274 
00:13:57.274 real 0m9.646s
00:13:57.274 user 0m16.475s
00:13:57.274 sys 0m2.113s
00:13:57.274 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:57.274 11:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.274 ************************************
00:13:57.274 END TEST raid5f_state_function_test_sb
00:13:57.274 ************************************
00:13:57.274 11:56:54 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:13:57.274 11:56:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:13:57.274 11:56:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:57.274 11:56:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:57.274 ************************************
00:13:57.274 START TEST raid5f_superblock_test
00:13:57.274 ************************************
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94141
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94141
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94141 ']'
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:57.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:57.274 11:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.534 [2024-12-06 11:56:55.050366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:13:57.534 [2024-12-06 11:56:55.050626] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94141 ]
00:13:57.534 [2024-12-06 11:56:55.200166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:57.534 [2024-12-06 11:56:55.245361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:57.534 [2024-12-06 11:56:55.287390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:57.534 [2024-12-06 11:56:55.287438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:13:58.102 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:13:58.103 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:58.103 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:58.103 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:58.103 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 malloc1
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 [2024-12-06 11:56:55.881267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:58.362 [2024-12-06 11:56:55.881414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.362 [2024-12-06 11:56:55.881448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:13:58.362 [2024-12-06 11:56:55.881479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.362 [2024-12-06 11:56:55.883501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.362 [2024-12-06 11:56:55.883586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:58.362 pt1
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 malloc2
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 [2024-12-06 11:56:55.927079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:58.362 [2024-12-06 11:56:55.927185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.362 [2024-12-06 11:56:55.927221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:58.362 [2024-12-06 11:56:55.927287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.362 [2024-12-06 11:56:55.931987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.362 [2024-12-06 11:56:55.932069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:58.362 pt2
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 malloc3
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.362 [2024-12-06 11:56:55.957966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:58.362 [2024-12-06 11:56:55.958092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.362 [2024-12-06 11:56:55.958125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:58.362 [2024-12-06 11:56:55.958154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.362 [2024-12-06 11:56:55.960181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.362 [2024-12-06 11:56:55.960285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:58.362 pt3
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.362 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.363 malloc4
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.363 [2024-12-06 11:56:55.990358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:13:58.363 [2024-12-06 11:56:55.990470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.363 [2024-12-06 11:56:55.990501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:58.363 [2024-12-06 11:56:55.990531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.363 [2024-12-06 11:56:55.992541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.363 [2024-12-06 11:56:55.992626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:13:58.363 pt4
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.363 11:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.363 [2024-12-06 11:56:56.002389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:58.363 [2024-12-06 11:56:56.004201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:58.363 [2024-12-06 11:56:56.004336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:58.363 [2024-12-06 11:56:56.004399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:13:58.363 [2024-12-06 11:56:56.004573] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:13:58.363 [2024-12-06 11:56:56.004627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:13:58.363 [2024-12-06 11:56:56.004875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:13:58.363 [2024-12-06 11:56:56.005361] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:13:58.363 [2024-12-06 11:56:56.005410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:13:58.363 [2024-12-06 11:56:56.005569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:58.363 "name": "raid_bdev1",
00:13:58.363 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c",
00:13:58.363 "strip_size_kb": 64,
00:13:58.363 "state": "online",
00:13:58.363 "raid_level": "raid5f",
00:13:58.363 "superblock": true,
00:13:58.363 "num_base_bdevs": 4,
00:13:58.363 "num_base_bdevs_discovered": 4,
00:13:58.363 "num_base_bdevs_operational": 4,
00:13:58.363 "base_bdevs_list": [
00:13:58.363 {
00:13:58.363 "name": "pt1",
00:13:58.363 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:58.363 "is_configured": true,
00:13:58.363 "data_offset": 2048,
00:13:58.363 "data_size": 63488
00:13:58.363 },
00:13:58.363 {
00:13:58.363 "name": "pt2",
00:13:58.363 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:58.363 "is_configured": true,
00:13:58.363 "data_offset": 2048,
00:13:58.363 "data_size": 63488
00:13:58.363 },
00:13:58.363 {
00:13:58.363 "name": "pt3",
00:13:58.363 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:58.363 "is_configured": true,
00:13:58.363 "data_offset": 2048,
00:13:58.363 "data_size": 63488
00:13:58.363 },
00:13:58.363 {
00:13:58.363 "name": "pt4",
00:13:58.363 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:58.363 "is_configured": true,
00:13:58.363 "data_offset": 2048,
00:13:58.363 "data_size": 63488
00:13:58.363 }
00:13:58.363 ]
00:13:58.363 }'
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:58.363 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.931 [2024-12-06 11:56:56.478650] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.931 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:58.931 "name": "raid_bdev1",
00:13:58.931 "aliases": [
00:13:58.931 "ab774df8-23b6-4067-92e8-329472b6c13c"
00:13:58.931 ],
00:13:58.931 "product_name": "Raid Volume",
00:13:58.931 "block_size": 512,
00:13:58.931 "num_blocks": 190464,
00:13:58.931 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c",
00:13:58.931 "assigned_rate_limits": {
00:13:58.931 "rw_ios_per_sec": 0,
00:13:58.931 "rw_mbytes_per_sec": 0,
00:13:58.931 "r_mbytes_per_sec": 0,
00:13:58.931 "w_mbytes_per_sec": 0
00:13:58.931 },
00:13:58.931 "claimed": false,
00:13:58.931 "zoned": false,
00:13:58.931 "supported_io_types": {
00:13:58.931 "read": true,
00:13:58.931 "write": true,
00:13:58.931 "unmap": false,
00:13:58.931 "flush": false,
00:13:58.931 "reset": true,
00:13:58.931 "nvme_admin": false,
00:13:58.931 "nvme_io": false,
00:13:58.931 "nvme_io_md": false,
00:13:58.931 "write_zeroes": true,
00:13:58.931 "zcopy": false,
00:13:58.931 "get_zone_info": false,
00:13:58.931 "zone_management": false,
00:13:58.931 "zone_append": false,
00:13:58.931 "compare": false,
00:13:58.931 "compare_and_write": false,
00:13:58.932 "abort": false,
00:13:58.932 "seek_hole": false,
00:13:58.932 "seek_data": false,
00:13:58.932 "copy": false,
00:13:58.932 "nvme_iov_md": false
00:13:58.932 },
00:13:58.932 "driver_specific": {
00:13:58.932 "raid": {
00:13:58.932 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c",
00:13:58.932 "strip_size_kb": 64,
00:13:58.932 "state": "online",
00:13:58.932 "raid_level": "raid5f",
00:13:58.932 "superblock": true,
00:13:58.932 "num_base_bdevs": 4,
00:13:58.932 "num_base_bdevs_discovered": 4,
00:13:58.932 "num_base_bdevs_operational": 4,
00:13:58.932 "base_bdevs_list": [
00:13:58.932 {
00:13:58.932 "name": "pt1",
00:13:58.932 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:58.932 "is_configured": true,
00:13:58.932 "data_offset": 2048,
00:13:58.932 "data_size": 63488
00:13:58.932 },
00:13:58.932 {
00:13:58.932 "name": "pt2",
00:13:58.932 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:58.932 "is_configured": true,
00:13:58.932 "data_offset": 2048,
00:13:58.932 "data_size": 63488
00:13:58.932 },
00:13:58.932 {
00:13:58.932 "name": "pt3",
00:13:58.932 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:58.932 "is_configured": true,
00:13:58.932 "data_offset": 2048,
00:13:58.932 "data_size": 63488
00:13:58.932 },
00:13:58.932 {
00:13:58.932 "name": "pt4",
00:13:58.932 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:58.932 "is_configured": true,
00:13:58.932 "data_offset": 2048,
00:13:58.932 "data_size": 63488
00:13:58.932 }
00:13:58.932 ]
00:13:58.932 }
00:13:58.932 }
00:13:58.932 }'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:58.932 pt2
00:13:58.932 pt3
00:13:58.932 pt4'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:58.932 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:59.192 [2024-12-06 11:56:56.790110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab774df8-23b6-4067-92e8-329472b6c13c
00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z
ab774df8-23b6-4067-92e8-329472b6c13c ']' 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 [2024-12-06 11:56:56.821910] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.192 [2024-12-06 11:56:56.821936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.192 [2024-12-06 11:56:56.821994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.192 [2024-12-06 11:56:56.822066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.192 [2024-12-06 11:56:56.822083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.192 
11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.192 11:56:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.192 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 [2024-12-06 11:56:56.989653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:59.452 [2024-12-06 11:56:56.991539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:59.452 [2024-12-06 11:56:56.991622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:59.452 [2024-12-06 11:56:56.991675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:59.452 [2024-12-06 11:56:56.991764] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:59.452 [2024-12-06 11:56:56.991829] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:59.452 [2024-12-06 11:56:56.991897] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:59.452 [2024-12-06 11:56:56.991950] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:59.452 [2024-12-06 11:56:56.992001] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.452 [2024-12-06 11:56:56.992054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:59.452 request: 00:13:59.452 { 00:13:59.452 "name": "raid_bdev1", 00:13:59.452 "raid_level": "raid5f", 00:13:59.452 "base_bdevs": [ 00:13:59.452 "malloc1", 00:13:59.452 "malloc2", 00:13:59.452 "malloc3", 00:13:59.452 "malloc4" 00:13:59.452 ], 00:13:59.452 "strip_size_kb": 64, 00:13:59.452 "superblock": false, 00:13:59.452 "method": "bdev_raid_create", 00:13:59.452 "req_id": 1 00:13:59.452 } 00:13:59.452 Got JSON-RPC error response 
00:13:59.452 response: 00:13:59.452 { 00:13:59.452 "code": -17, 00:13:59.452 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:59.452 } 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.452 11:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 [2024-12-06 11:56:57.053505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.452 [2024-12-06 11:56:57.053607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:59.452 [2024-12-06 11:56:57.053643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:59.452 [2024-12-06 11:56:57.053671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.452 [2024-12-06 11:56:57.055717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.452 [2024-12-06 11:56:57.055801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.452 [2024-12-06 11:56:57.055880] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:59.452 [2024-12-06 11:56:57.055936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.452 pt1 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.452 "name": "raid_bdev1", 00:13:59.452 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:13:59.452 "strip_size_kb": 64, 00:13:59.452 "state": "configuring", 00:13:59.452 "raid_level": "raid5f", 00:13:59.452 "superblock": true, 00:13:59.452 "num_base_bdevs": 4, 00:13:59.452 "num_base_bdevs_discovered": 1, 00:13:59.452 "num_base_bdevs_operational": 4, 00:13:59.452 "base_bdevs_list": [ 00:13:59.452 { 00:13:59.452 "name": "pt1", 00:13:59.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.452 "is_configured": true, 00:13:59.452 "data_offset": 2048, 00:13:59.452 "data_size": 63488 00:13:59.452 }, 00:13:59.452 { 00:13:59.452 "name": null, 00:13:59.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.452 "is_configured": false, 00:13:59.452 "data_offset": 2048, 00:13:59.452 "data_size": 63488 00:13:59.452 }, 00:13:59.452 { 00:13:59.452 "name": null, 00:13:59.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.452 "is_configured": false, 00:13:59.452 "data_offset": 2048, 00:13:59.452 "data_size": 63488 00:13:59.452 }, 00:13:59.452 { 00:13:59.452 "name": null, 00:13:59.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.452 "is_configured": false, 00:13:59.452 "data_offset": 2048, 00:13:59.452 "data_size": 63488 00:13:59.452 } 00:13:59.452 ] 00:13:59.452 }' 
00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.452 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.021 [2024-12-06 11:56:57.504792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.021 [2024-12-06 11:56:57.504892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.021 [2024-12-06 11:56:57.504914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:00.021 [2024-12-06 11:56:57.504923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.021 [2024-12-06 11:56:57.505281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.021 [2024-12-06 11:56:57.505299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.021 [2024-12-06 11:56:57.505357] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.021 [2024-12-06 11:56:57.505384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.021 pt2 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.021 [2024-12-06 11:56:57.516794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.021 "name": "raid_bdev1", 00:14:00.021 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:00.021 "strip_size_kb": 64, 00:14:00.021 "state": "configuring", 00:14:00.021 "raid_level": "raid5f", 00:14:00.021 "superblock": true, 00:14:00.021 "num_base_bdevs": 4, 00:14:00.021 "num_base_bdevs_discovered": 1, 00:14:00.021 "num_base_bdevs_operational": 4, 00:14:00.021 "base_bdevs_list": [ 00:14:00.021 { 00:14:00.021 "name": "pt1", 00:14:00.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.021 "is_configured": true, 00:14:00.021 "data_offset": 2048, 00:14:00.021 "data_size": 63488 00:14:00.021 }, 00:14:00.021 { 00:14:00.021 "name": null, 00:14:00.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.021 "is_configured": false, 00:14:00.021 "data_offset": 0, 00:14:00.021 "data_size": 63488 00:14:00.021 }, 00:14:00.021 { 00:14:00.021 "name": null, 00:14:00.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.021 "is_configured": false, 00:14:00.021 "data_offset": 2048, 00:14:00.021 "data_size": 63488 00:14:00.021 }, 00:14:00.021 { 00:14:00.021 "name": null, 00:14:00.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.021 "is_configured": false, 00:14:00.021 "data_offset": 2048, 00:14:00.021 "data_size": 63488 00:14:00.021 } 00:14:00.021 ] 00:14:00.021 }' 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.021 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 [2024-12-06 11:56:57.996033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.281 [2024-12-06 11:56:57.996146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.281 [2024-12-06 11:56:57.996180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:00.281 [2024-12-06 11:56:57.996209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.281 [2024-12-06 11:56:57.996561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.281 [2024-12-06 11:56:57.996619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.281 [2024-12-06 11:56:57.996691] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.281 [2024-12-06 11:56:57.996737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.281 pt2 00:14:00.281 11:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 [2024-12-06 11:56:58.007993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:00.281 [2024-12-06 11:56:58.008098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.281 [2024-12-06 11:56:58.008129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:00.281 [2024-12-06 11:56:58.008168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.281 [2024-12-06 11:56:58.008490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.281 [2024-12-06 11:56:58.008547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:00.281 [2024-12-06 11:56:58.008623] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:00.281 [2024-12-06 11:56:58.008667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:00.281 pt3 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.281 [2024-12-06 11:56:58.020010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:00.281 [2024-12-06 11:56:58.020055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.281 [2024-12-06 11:56:58.020068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:00.281 [2024-12-06 11:56:58.020077] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.281 [2024-12-06 11:56:58.020354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.281 [2024-12-06 11:56:58.020374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:00.281 [2024-12-06 11:56:58.020418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:00.281 [2024-12-06 11:56:58.020435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:00.281 [2024-12-06 11:56:58.020523] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:00.281 [2024-12-06 11:56:58.020534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:00.281 [2024-12-06 11:56:58.020740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:00.281 [2024-12-06 11:56:58.021171] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:00.281 [2024-12-06 11:56:58.021204] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:00.281 [2024-12-06 11:56:58.021311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.281 pt4 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.281 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.541 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.541 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.541 "name": "raid_bdev1", 00:14:00.541 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:00.541 "strip_size_kb": 64, 00:14:00.541 "state": "online", 00:14:00.541 "raid_level": "raid5f", 00:14:00.541 "superblock": true, 00:14:00.541 "num_base_bdevs": 4, 00:14:00.541 "num_base_bdevs_discovered": 4, 00:14:00.541 "num_base_bdevs_operational": 4, 00:14:00.541 "base_bdevs_list": [ 00:14:00.541 { 00:14:00.541 "name": "pt1", 00:14:00.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.541 "is_configured": true, 00:14:00.541 
"data_offset": 2048, 00:14:00.541 "data_size": 63488 00:14:00.541 }, 00:14:00.541 { 00:14:00.541 "name": "pt2", 00:14:00.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.541 "is_configured": true, 00:14:00.541 "data_offset": 2048, 00:14:00.541 "data_size": 63488 00:14:00.541 }, 00:14:00.541 { 00:14:00.541 "name": "pt3", 00:14:00.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.541 "is_configured": true, 00:14:00.541 "data_offset": 2048, 00:14:00.541 "data_size": 63488 00:14:00.541 }, 00:14:00.541 { 00:14:00.541 "name": "pt4", 00:14:00.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.541 "is_configured": true, 00:14:00.541 "data_offset": 2048, 00:14:00.541 "data_size": 63488 00:14:00.541 } 00:14:00.541 ] 00:14:00.541 }' 00:14:00.541 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.541 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.800 11:56:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.800 [2024-12-06 11:56:58.483468] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.800 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.800 "name": "raid_bdev1", 00:14:00.800 "aliases": [ 00:14:00.800 "ab774df8-23b6-4067-92e8-329472b6c13c" 00:14:00.800 ], 00:14:00.800 "product_name": "Raid Volume", 00:14:00.800 "block_size": 512, 00:14:00.800 "num_blocks": 190464, 00:14:00.800 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:00.800 "assigned_rate_limits": { 00:14:00.800 "rw_ios_per_sec": 0, 00:14:00.800 "rw_mbytes_per_sec": 0, 00:14:00.800 "r_mbytes_per_sec": 0, 00:14:00.800 "w_mbytes_per_sec": 0 00:14:00.800 }, 00:14:00.800 "claimed": false, 00:14:00.800 "zoned": false, 00:14:00.801 "supported_io_types": { 00:14:00.801 "read": true, 00:14:00.801 "write": true, 00:14:00.801 "unmap": false, 00:14:00.801 "flush": false, 00:14:00.801 "reset": true, 00:14:00.801 "nvme_admin": false, 00:14:00.801 "nvme_io": false, 00:14:00.801 "nvme_io_md": false, 00:14:00.801 "write_zeroes": true, 00:14:00.801 "zcopy": false, 00:14:00.801 "get_zone_info": false, 00:14:00.801 "zone_management": false, 00:14:00.801 "zone_append": false, 00:14:00.801 "compare": false, 00:14:00.801 "compare_and_write": false, 00:14:00.801 "abort": false, 00:14:00.801 "seek_hole": false, 00:14:00.801 "seek_data": false, 00:14:00.801 "copy": false, 00:14:00.801 "nvme_iov_md": false 00:14:00.801 }, 00:14:00.801 "driver_specific": { 00:14:00.801 "raid": { 00:14:00.801 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:00.801 "strip_size_kb": 64, 00:14:00.801 "state": "online", 00:14:00.801 "raid_level": "raid5f", 00:14:00.801 "superblock": true, 00:14:00.801 "num_base_bdevs": 4, 00:14:00.801 "num_base_bdevs_discovered": 4, 
00:14:00.801 "num_base_bdevs_operational": 4, 00:14:00.801 "base_bdevs_list": [ 00:14:00.801 { 00:14:00.801 "name": "pt1", 00:14:00.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.801 "is_configured": true, 00:14:00.801 "data_offset": 2048, 00:14:00.801 "data_size": 63488 00:14:00.801 }, 00:14:00.801 { 00:14:00.801 "name": "pt2", 00:14:00.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.801 "is_configured": true, 00:14:00.801 "data_offset": 2048, 00:14:00.801 "data_size": 63488 00:14:00.801 }, 00:14:00.801 { 00:14:00.801 "name": "pt3", 00:14:00.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.801 "is_configured": true, 00:14:00.801 "data_offset": 2048, 00:14:00.801 "data_size": 63488 00:14:00.801 }, 00:14:00.801 { 00:14:00.801 "name": "pt4", 00:14:00.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.801 "is_configured": true, 00:14:00.801 "data_offset": 2048, 00:14:00.801 "data_size": 63488 00:14:00.801 } 00:14:00.801 ] 00:14:00.801 } 00:14:00.801 } 00:14:00.801 }' 00:14:00.801 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.801 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:00.801 pt2 00:14:00.801 pt3 00:14:00.801 pt4' 00:14:00.801 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:01.061 [2024-12-06 11:56:58.770940] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.061 
11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab774df8-23b6-4067-92e8-329472b6c13c '!=' ab774df8-23b6-4067-92e8-329472b6c13c ']' 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.061 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.061 [2024-12-06 11:56:58.810727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.321 "name": "raid_bdev1", 00:14:01.321 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:01.321 "strip_size_kb": 64, 00:14:01.321 "state": "online", 00:14:01.321 "raid_level": "raid5f", 00:14:01.321 "superblock": true, 00:14:01.321 "num_base_bdevs": 4, 00:14:01.321 "num_base_bdevs_discovered": 3, 00:14:01.321 "num_base_bdevs_operational": 3, 00:14:01.321 "base_bdevs_list": [ 00:14:01.321 { 00:14:01.321 "name": null, 00:14:01.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.321 "is_configured": false, 00:14:01.321 "data_offset": 0, 00:14:01.321 "data_size": 63488 00:14:01.321 }, 00:14:01.321 { 00:14:01.321 "name": "pt2", 00:14:01.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.321 "is_configured": true, 00:14:01.321 "data_offset": 2048, 00:14:01.321 "data_size": 63488 00:14:01.321 }, 00:14:01.321 { 00:14:01.321 "name": "pt3", 00:14:01.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.321 "is_configured": true, 00:14:01.321 "data_offset": 2048, 00:14:01.321 "data_size": 63488 00:14:01.321 }, 00:14:01.321 { 00:14:01.321 "name": "pt4", 00:14:01.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.321 "is_configured": true, 00:14:01.321 
"data_offset": 2048, 00:14:01.321 "data_size": 63488 00:14:01.321 } 00:14:01.321 ] 00:14:01.321 }' 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.321 11:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.580 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 [2024-12-06 11:56:59.309849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.580 [2024-12-06 11:56:59.309916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.580 [2024-12-06 11:56:59.309998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.581 [2024-12-06 11:56:59.310066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.581 [2024-12-06 11:56:59.310126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.581 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.840 [2024-12-06 11:56:59.405684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.840 [2024-12-06 11:56:59.405728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.840 [2024-12-06 11:56:59.405742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:01.840 [2024-12-06 11:56:59.405752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.840 [2024-12-06 11:56:59.407762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.840 [2024-12-06 11:56:59.407801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.840 [2024-12-06 11:56:59.407860] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.840 [2024-12-06 11:56:59.407897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.840 pt2 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.840 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.841 "name": "raid_bdev1", 00:14:01.841 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:01.841 "strip_size_kb": 64, 00:14:01.841 "state": "configuring", 00:14:01.841 "raid_level": "raid5f", 00:14:01.841 "superblock": true, 00:14:01.841 
"num_base_bdevs": 4, 00:14:01.841 "num_base_bdevs_discovered": 1, 00:14:01.841 "num_base_bdevs_operational": 3, 00:14:01.841 "base_bdevs_list": [ 00:14:01.841 { 00:14:01.841 "name": null, 00:14:01.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.841 "is_configured": false, 00:14:01.841 "data_offset": 2048, 00:14:01.841 "data_size": 63488 00:14:01.841 }, 00:14:01.841 { 00:14:01.841 "name": "pt2", 00:14:01.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.841 "is_configured": true, 00:14:01.841 "data_offset": 2048, 00:14:01.841 "data_size": 63488 00:14:01.841 }, 00:14:01.841 { 00:14:01.841 "name": null, 00:14:01.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.841 "is_configured": false, 00:14:01.841 "data_offset": 2048, 00:14:01.841 "data_size": 63488 00:14:01.841 }, 00:14:01.841 { 00:14:01.841 "name": null, 00:14:01.841 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.841 "is_configured": false, 00:14:01.841 "data_offset": 2048, 00:14:01.841 "data_size": 63488 00:14:01.841 } 00:14:01.841 ] 00:14:01.841 }' 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.841 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.410 [2024-12-06 11:56:59.864941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:02.410 [2024-12-06 
11:56:59.864997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.410 [2024-12-06 11:56:59.865013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:02.410 [2024-12-06 11:56:59.865024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.410 [2024-12-06 11:56:59.865364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.410 [2024-12-06 11:56:59.865384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:02.410 [2024-12-06 11:56:59.865444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:02.410 [2024-12-06 11:56:59.865465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.410 pt3 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.410 "name": "raid_bdev1", 00:14:02.410 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:02.410 "strip_size_kb": 64, 00:14:02.410 "state": "configuring", 00:14:02.410 "raid_level": "raid5f", 00:14:02.410 "superblock": true, 00:14:02.410 "num_base_bdevs": 4, 00:14:02.410 "num_base_bdevs_discovered": 2, 00:14:02.410 "num_base_bdevs_operational": 3, 00:14:02.410 "base_bdevs_list": [ 00:14:02.410 { 00:14:02.410 "name": null, 00:14:02.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.410 "is_configured": false, 00:14:02.410 "data_offset": 2048, 00:14:02.410 "data_size": 63488 00:14:02.410 }, 00:14:02.410 { 00:14:02.410 "name": "pt2", 00:14:02.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.410 "is_configured": true, 00:14:02.410 "data_offset": 2048, 00:14:02.410 "data_size": 63488 00:14:02.410 }, 00:14:02.410 { 00:14:02.410 "name": "pt3", 00:14:02.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.410 "is_configured": true, 00:14:02.410 "data_offset": 2048, 00:14:02.410 "data_size": 63488 00:14:02.410 }, 00:14:02.410 { 00:14:02.410 "name": null, 00:14:02.410 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.410 "is_configured": false, 00:14:02.410 "data_offset": 2048, 
00:14:02.410 "data_size": 63488 00:14:02.410 } 00:14:02.410 ] 00:14:02.410 }' 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.410 11:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.671 [2024-12-06 11:57:00.260279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:02.671 [2024-12-06 11:57:00.260388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.671 [2024-12-06 11:57:00.260424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:02.671 [2024-12-06 11:57:00.260453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.671 [2024-12-06 11:57:00.260828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.671 [2024-12-06 11:57:00.260883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:02.671 [2024-12-06 11:57:00.260966] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:02.671 [2024-12-06 11:57:00.261012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:02.671 [2024-12-06 11:57:00.261119] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:02.671 [2024-12-06 11:57:00.261156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.671 [2024-12-06 11:57:00.261394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:02.671 [2024-12-06 11:57:00.261956] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:02.671 [2024-12-06 11:57:00.262006] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:02.671 [2024-12-06 11:57:00.262271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.671 pt4 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.671 
11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.671 "name": "raid_bdev1", 00:14:02.671 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c", 00:14:02.671 "strip_size_kb": 64, 00:14:02.671 "state": "online", 00:14:02.671 "raid_level": "raid5f", 00:14:02.671 "superblock": true, 00:14:02.671 "num_base_bdevs": 4, 00:14:02.671 "num_base_bdevs_discovered": 3, 00:14:02.671 "num_base_bdevs_operational": 3, 00:14:02.671 "base_bdevs_list": [ 00:14:02.671 { 00:14:02.671 "name": null, 00:14:02.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.671 "is_configured": false, 00:14:02.671 "data_offset": 2048, 00:14:02.671 "data_size": 63488 00:14:02.671 }, 00:14:02.671 { 00:14:02.671 "name": "pt2", 00:14:02.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.671 "is_configured": true, 00:14:02.671 "data_offset": 2048, 00:14:02.671 "data_size": 63488 00:14:02.671 }, 00:14:02.671 { 00:14:02.671 "name": "pt3", 00:14:02.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.671 "is_configured": true, 00:14:02.671 "data_offset": 2048, 00:14:02.671 "data_size": 63488 00:14:02.671 }, 00:14:02.671 { 00:14:02.671 "name": "pt4", 00:14:02.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.671 "is_configured": true, 00:14:02.671 "data_offset": 2048, 00:14:02.671 "data_size": 63488 00:14:02.671 } 00:14:02.671 ] 00:14:02.671 }' 00:14:02.671 11:57:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:02.671 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.931 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:02.931 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.931 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.191 [2024-12-06 11:57:00.691511] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:03.191 [2024-12-06 11:57:00.691590] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:03.191 [2024-12-06 11:57:00.691648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:03.191 [2024-12-06 11:57:00.691712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:03.191 [2024-12-06 11:57:00.691737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.191 [2024-12-06 11:57:00.767389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:03.191 [2024-12-06 11:57:00.767434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:03.191 [2024-12-06 11:57:00.767451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:14:03.191 [2024-12-06 11:57:00.767459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:03.191 [2024-12-06 11:57:00.769582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:03.191 [2024-12-06 11:57:00.769618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:03.191 [2024-12-06 11:57:00.769679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:03.191 [2024-12-06 11:57:00.769716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:03.191 [2024-12-06 11:57:00.769816] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:14:03.191 [2024-12-06 11:57:00.769829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:03.191 [2024-12-06 11:57:00.769848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring
00:14:03.191 [2024-12-06 11:57:00.769887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:03.191 [2024-12-06 11:57:00.770000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:03.191 pt1
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:03.191 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:03.192 "name": "raid_bdev1",
00:14:03.192 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c",
00:14:03.192 "strip_size_kb": 64,
00:14:03.192 "state": "configuring",
00:14:03.192 "raid_level": "raid5f",
00:14:03.192 "superblock": true,
00:14:03.192 "num_base_bdevs": 4,
00:14:03.192 "num_base_bdevs_discovered": 2,
00:14:03.192 "num_base_bdevs_operational": 3,
00:14:03.192 "base_bdevs_list": [
00:14:03.192 {
00:14:03.192 "name": null,
00:14:03.192 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.192 "is_configured": false,
00:14:03.192 "data_offset": 2048,
00:14:03.192 "data_size": 63488
00:14:03.192 },
00:14:03.192 {
00:14:03.192 "name": "pt2",
00:14:03.192 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:03.192 "is_configured": true,
00:14:03.192 "data_offset": 2048,
00:14:03.192 "data_size": 63488
00:14:03.192 },
00:14:03.192 {
00:14:03.192 "name": "pt3",
00:14:03.192 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:03.192 "is_configured": true,
00:14:03.192 "data_offset": 2048,
00:14:03.192 "data_size": 63488
00:14:03.192 },
00:14:03.192 {
00:14:03.192 "name": null,
00:14:03.192 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:03.192 "is_configured": false,
00:14:03.192 "data_offset": 2048,
00:14:03.192 "data_size": 63488
00:14:03.192 }
00:14:03.192 ]
00:14:03.192 }'
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:03.192 11:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.451 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:03.451 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:14:03.451 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.451 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.711 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.711 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:14:03.711 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:03.711 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.711 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.711 [2024-12-06 11:57:01.246598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:03.711 [2024-12-06 11:57:01.246693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:03.711 [2024-12-06 11:57:01.246725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:14:03.711 [2024-12-06 11:57:01.246755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:03.711 [2024-12-06 11:57:01.247104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:03.711 [2024-12-06 11:57:01.247162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:03.711 [2024-12-06 11:57:01.247254] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:03.711 [2024-12-06 11:57:01.247326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:03.711 [2024-12-06 11:57:01.247461] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380
00:14:03.711 [2024-12-06 11:57:01.247503] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:03.712 [2024-12-06 11:57:01.247742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0
00:14:03.712 [2024-12-06 11:57:01.248312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380
00:14:03.712 [2024-12-06 11:57:01.248361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380
00:14:03.712 [2024-12-06 11:57:01.248564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:03.712 pt4
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:03.712 "name": "raid_bdev1",
00:14:03.712 "uuid": "ab774df8-23b6-4067-92e8-329472b6c13c",
00:14:03.712 "strip_size_kb": 64,
00:14:03.712 "state": "online",
00:14:03.712 "raid_level": "raid5f",
00:14:03.712 "superblock": true,
00:14:03.712 "num_base_bdevs": 4,
00:14:03.712 "num_base_bdevs_discovered": 3,
00:14:03.712 "num_base_bdevs_operational": 3,
00:14:03.712 "base_bdevs_list": [
00:14:03.712 {
00:14:03.712 "name": null,
00:14:03.712 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.712 "is_configured": false,
00:14:03.712 "data_offset": 2048,
00:14:03.712 "data_size": 63488
00:14:03.712 },
00:14:03.712 {
00:14:03.712 "name": "pt2",
00:14:03.712 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:03.712 "is_configured": true,
00:14:03.712 "data_offset": 2048,
00:14:03.712 "data_size": 63488
00:14:03.712 },
00:14:03.712 {
00:14:03.712 "name": "pt3",
00:14:03.712 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:03.712 "is_configured": true,
00:14:03.712 "data_offset": 2048,
00:14:03.712 "data_size": 63488
00:14:03.712 },
00:14:03.712 {
00:14:03.712 "name": "pt4",
00:14:03.712 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:03.712 "is_configured": true,
00:14:03.712 "data_offset": 2048,
00:14:03.712 "data_size": 63488
00:14:03.712 }
00:14:03.712 ]
00:14:03.712 }'
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:03.712 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.972 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:14:03.972 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.972 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.972 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:03.972 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:14:04.232 [2024-12-06 11:57:01.737922] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab774df8-23b6-4067-92e8-329472b6c13c '!=' ab774df8-23b6-4067-92e8-329472b6c13c ']'
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94141
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94141 ']'
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94141
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94141
killing process with pid 94141
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94141'
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94141
00:14:04.232 [2024-12-06 11:57:01.797959] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:04.232 [2024-12-06 11:57:01.798022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:04.232 [2024-12-06 11:57:01.798083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:04.232 [2024-12-06 11:57:01.798091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline
00:14:04.232 11:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94141
00:14:04.232 [2024-12-06 11:57:01.841833] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:04.493 11:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:14:04.493 ************************************
00:14:04.493 END TEST raid5f_superblock_test ************************************
00:14:04.493
00:14:04.493 real 0m7.124s
00:14:04.493 user 0m11.962s
00:14:04.493 sys 0m1.532s
00:14:04.493 11:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:04.493 11:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.493 11:57:02 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:14:04.493 11:57:02 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true
00:14:04.493 11:57:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:14:04.493 11:57:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:04.493 11:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:04.493 ************************************
00:14:04.493 START TEST raid5f_rebuild_test ************************************
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:04.493 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94611
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94611
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94611 ']'
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:04.494 11:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.754 [2024-12-06 11:57:02.261141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:14:04.754 [2024-12-06 11:57:02.261363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536).
00:14:04.754 Zero copy mechanism will not be used.
00:14:04.754 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94611 ]
00:14:04.754 [2024-12-06 11:57:02.405922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:04.754 [2024-12-06 11:57:02.450299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:04.754 [2024-12-06 11:57:02.492359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:04.754 [2024-12-06 11:57:02.492464] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.324 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 BaseBdev1_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 [2024-12-06 11:57:03.086211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:05.585 [2024-12-06 11:57:03.086372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.585 [2024-12-06 11:57:03.086411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:14:05.585 [2024-12-06 11:57:03.086445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.585 [2024-12-06 11:57:03.088499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.585 [2024-12-06 11:57:03.088581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:05.585 BaseBdev1
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 BaseBdev2_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 [2024-12-06 11:57:03.135764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:05.585 [2024-12-06 11:57:03.135867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.585 [2024-12-06 11:57:03.135913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:05.585 [2024-12-06 11:57:03.135937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.585 [2024-12-06 11:57:03.140353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.585 [2024-12-06 11:57:03.140406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:05.585 BaseBdev2
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 BaseBdev3_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 [2024-12-06 11:57:03.166482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:14:05.585 [2024-12-06 11:57:03.166625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.585 [2024-12-06 11:57:03.166667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:05.585 [2024-12-06 11:57:03.166697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.585 [2024-12-06 11:57:03.168727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.585 [2024-12-06 11:57:03.168812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:05.585 BaseBdev3
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 BaseBdev4_malloc
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.585 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.585 [2024-12-06 11:57:03.194952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:14:05.585 [2024-12-06 11:57:03.194997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.585 [2024-12-06 11:57:03.195018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:05.585 [2024-12-06 11:57:03.195026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.585 [2024-12-06 11:57:03.197053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.585 [2024-12-06 11:57:03.197085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:05.585 BaseBdev4
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.586 spare_malloc
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.586 spare_delay
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.586 [2024-12-06 11:57:03.235303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:05.586 [2024-12-06 11:57:03.235431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.586 [2024-12-06 11:57:03.235455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:14:05.586 [2024-12-06 11:57:03.235463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.586 [2024-12-06 11:57:03.237453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.586 [2024-12-06 11:57:03.237496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:05.586 spare
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.586 [2024-12-06 11:57:03.247358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:05.586 [2024-12-06 11:57:03.249022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:05.586 [2024-12-06 11:57:03.249085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:05.586 [2024-12-06 11:57:03.249132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:05.586 [2024-12-06 11:57:03.249215] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:14:05.586 [2024-12-06 11:57:03.249226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:14:05.586 [2024-12-06 11:57:03.249537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:14:05.586 [2024-12-06 11:57:03.250002] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:14:05.586 [2024-12-06 11:57:03.250056] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:14:05.586 [2024-12-06 11:57:03.250199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:05.586 "name": "raid_bdev1",
00:14:05.586 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337",
00:14:05.586 "strip_size_kb": 64,
00:14:05.586 "state": "online",
00:14:05.586 "raid_level": "raid5f",
00:14:05.586 "superblock": false,
00:14:05.586 "num_base_bdevs": 4,
00:14:05.586 "num_base_bdevs_discovered": 4,
00:14:05.586 "num_base_bdevs_operational": 4,
00:14:05.586 "base_bdevs_list": [
00:14:05.586 {
00:14:05.586 "name": "BaseBdev1",
00:14:05.586 "uuid": "c2cc81e9-b1f3-5825-b63c-a8a62fef3772",
00:14:05.586 "is_configured": true,
00:14:05.586 "data_offset": 0,
00:14:05.586 "data_size": 65536
00:14:05.586 },
00:14:05.586 {
00:14:05.586 "name": "BaseBdev2",
00:14:05.586 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960",
00:14:05.586 "is_configured": true,
00:14:05.586 "data_offset": 0,
00:14:05.586 "data_size": 65536
00:14:05.586 },
00:14:05.586 {
00:14:05.586 "name": "BaseBdev3",
00:14:05.586 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00",
00:14:05.586 "is_configured": true,
00:14:05.586 "data_offset": 0,
00:14:05.586 "data_size": 65536
00:14:05.586 },
00:14:05.586 {
00:14:05.586 "name": "BaseBdev4",
00:14:05.586 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55",
00:14:05.586 "is_configured": true,
00:14:05.586 "data_offset": 0,
00:14:05.586 "data_size": 65536
00:14:05.586 }
00:14:05.586 ]
00:14:05.586 }'
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:05.586 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:06.157 [2024-12-06 11:57:03.663453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608
00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.157 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:06.157 [2024-12-06 11:57:03.902922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:06.418 /dev/nbd0 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.418 1+0 records in 00:14:06.418 1+0 records out 00:14:06.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017 s, 24.1 MB/s 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:06.418 11:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:06.678 512+0 records in 00:14:06.678 512+0 records out 00:14:06.678 100663296 bytes (101 MB, 96 MiB) copied, 0.42529 s, 237 MB/s 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.678 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.938 [2024-12-06 11:57:04.607186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.938 [2024-12-06 11:57:04.615267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.938 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.939 "name": "raid_bdev1", 00:14:06.939 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:06.939 "strip_size_kb": 64, 00:14:06.939 "state": "online", 00:14:06.939 "raid_level": "raid5f", 00:14:06.939 "superblock": false, 00:14:06.939 "num_base_bdevs": 4, 00:14:06.939 "num_base_bdevs_discovered": 3, 00:14:06.939 "num_base_bdevs_operational": 3, 00:14:06.939 "base_bdevs_list": [ 00:14:06.939 { 00:14:06.939 "name": null, 00:14:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.939 "is_configured": false, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 65536 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev2", 00:14:06.939 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:06.939 "is_configured": true, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 65536 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev3", 00:14:06.939 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:06.939 "is_configured": true, 00:14:06.939 "data_offset": 0, 
00:14:06.939 "data_size": 65536 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev4", 00:14:06.939 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:06.939 "is_configured": true, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 65536 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }' 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.939 11:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.513 11:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.513 11:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.513 11:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.513 [2024-12-06 11:57:05.046498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.513 [2024-12-06 11:57:05.049851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:07.513 [2024-12-06 11:57:05.052073] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.513 11:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.513 11:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.481 11:57:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.481 "name": "raid_bdev1", 00:14:08.481 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:08.481 "strip_size_kb": 64, 00:14:08.481 "state": "online", 00:14:08.481 "raid_level": "raid5f", 00:14:08.481 "superblock": false, 00:14:08.481 "num_base_bdevs": 4, 00:14:08.481 "num_base_bdevs_discovered": 4, 00:14:08.481 "num_base_bdevs_operational": 4, 00:14:08.481 "process": { 00:14:08.481 "type": "rebuild", 00:14:08.481 "target": "spare", 00:14:08.481 "progress": { 00:14:08.481 "blocks": 19200, 00:14:08.481 "percent": 9 00:14:08.481 } 00:14:08.481 }, 00:14:08.481 "base_bdevs_list": [ 00:14:08.481 { 00:14:08.481 "name": "spare", 00:14:08.481 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:08.481 "is_configured": true, 00:14:08.481 "data_offset": 0, 00:14:08.481 "data_size": 65536 00:14:08.481 }, 00:14:08.481 { 00:14:08.481 "name": "BaseBdev2", 00:14:08.481 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:08.481 "is_configured": true, 00:14:08.481 "data_offset": 0, 00:14:08.481 "data_size": 65536 00:14:08.481 }, 00:14:08.481 { 00:14:08.481 "name": "BaseBdev3", 00:14:08.481 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:08.481 "is_configured": true, 00:14:08.481 "data_offset": 0, 00:14:08.481 "data_size": 65536 00:14:08.481 }, 00:14:08.481 { 00:14:08.481 "name": "BaseBdev4", 00:14:08.481 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 
00:14:08.481 "is_configured": true, 00:14:08.481 "data_offset": 0, 00:14:08.481 "data_size": 65536 00:14:08.481 } 00:14:08.481 ] 00:14:08.481 }' 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.481 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.482 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.482 [2024-12-06 11:57:06.218754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.742 [2024-12-06 11:57:06.257290] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.742 [2024-12-06 11:57:06.257342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.742 [2024-12-06 11:57:06.257360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.742 [2024-12-06 11:57:06.257368] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.742 "name": "raid_bdev1", 00:14:08.742 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:08.742 "strip_size_kb": 64, 00:14:08.742 "state": "online", 00:14:08.742 "raid_level": "raid5f", 00:14:08.742 "superblock": false, 00:14:08.742 "num_base_bdevs": 4, 00:14:08.742 "num_base_bdevs_discovered": 3, 00:14:08.742 "num_base_bdevs_operational": 3, 00:14:08.742 "base_bdevs_list": [ 00:14:08.742 { 00:14:08.742 "name": null, 00:14:08.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.742 "is_configured": false, 00:14:08.742 "data_offset": 0, 00:14:08.742 "data_size": 65536 
00:14:08.742 }, 00:14:08.742 { 00:14:08.742 "name": "BaseBdev2", 00:14:08.742 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:08.742 "is_configured": true, 00:14:08.742 "data_offset": 0, 00:14:08.742 "data_size": 65536 00:14:08.742 }, 00:14:08.742 { 00:14:08.742 "name": "BaseBdev3", 00:14:08.742 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:08.742 "is_configured": true, 00:14:08.742 "data_offset": 0, 00:14:08.742 "data_size": 65536 00:14:08.742 }, 00:14:08.742 { 00:14:08.742 "name": "BaseBdev4", 00:14:08.742 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:08.742 "is_configured": true, 00:14:08.742 "data_offset": 0, 00:14:08.742 "data_size": 65536 00:14:08.742 } 00:14:08.742 ] 00:14:08.742 }' 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.742 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.003 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.263 "name": "raid_bdev1", 00:14:09.263 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:09.263 "strip_size_kb": 64, 00:14:09.263 "state": "online", 00:14:09.263 "raid_level": "raid5f", 00:14:09.263 "superblock": false, 00:14:09.263 "num_base_bdevs": 4, 00:14:09.263 "num_base_bdevs_discovered": 3, 00:14:09.263 "num_base_bdevs_operational": 3, 00:14:09.263 "base_bdevs_list": [ 00:14:09.263 { 00:14:09.263 "name": null, 00:14:09.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.263 "is_configured": false, 00:14:09.263 "data_offset": 0, 00:14:09.263 "data_size": 65536 00:14:09.263 }, 00:14:09.263 { 00:14:09.263 "name": "BaseBdev2", 00:14:09.263 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:09.263 "is_configured": true, 00:14:09.263 "data_offset": 0, 00:14:09.263 "data_size": 65536 00:14:09.263 }, 00:14:09.263 { 00:14:09.263 "name": "BaseBdev3", 00:14:09.263 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:09.263 "is_configured": true, 00:14:09.263 "data_offset": 0, 00:14:09.263 "data_size": 65536 00:14:09.263 }, 00:14:09.263 { 00:14:09.263 "name": "BaseBdev4", 00:14:09.263 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:09.263 "is_configured": true, 00:14:09.263 "data_offset": 0, 00:14:09.263 "data_size": 65536 00:14:09.263 } 00:14:09.263 ] 00:14:09.263 }' 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.263 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.264 [2024-12-06 11:57:06.865539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.264 [2024-12-06 11:57:06.868443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:14:09.264 [2024-12-06 11:57:06.870696] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.264 11:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 11:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.204 
"name": "raid_bdev1", 00:14:10.204 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:10.204 "strip_size_kb": 64, 00:14:10.204 "state": "online", 00:14:10.204 "raid_level": "raid5f", 00:14:10.204 "superblock": false, 00:14:10.204 "num_base_bdevs": 4, 00:14:10.204 "num_base_bdevs_discovered": 4, 00:14:10.204 "num_base_bdevs_operational": 4, 00:14:10.204 "process": { 00:14:10.204 "type": "rebuild", 00:14:10.204 "target": "spare", 00:14:10.204 "progress": { 00:14:10.204 "blocks": 19200, 00:14:10.204 "percent": 9 00:14:10.204 } 00:14:10.204 }, 00:14:10.204 "base_bdevs_list": [ 00:14:10.204 { 00:14:10.204 "name": "spare", 00:14:10.204 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:10.204 "is_configured": true, 00:14:10.204 "data_offset": 0, 00:14:10.204 "data_size": 65536 00:14:10.204 }, 00:14:10.204 { 00:14:10.204 "name": "BaseBdev2", 00:14:10.204 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:10.204 "is_configured": true, 00:14:10.204 "data_offset": 0, 00:14:10.204 "data_size": 65536 00:14:10.204 }, 00:14:10.204 { 00:14:10.204 "name": "BaseBdev3", 00:14:10.204 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:10.204 "is_configured": true, 00:14:10.204 "data_offset": 0, 00:14:10.204 "data_size": 65536 00:14:10.204 }, 00:14:10.204 { 00:14:10.204 "name": "BaseBdev4", 00:14:10.204 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:10.204 "is_configured": true, 00:14:10.204 "data_offset": 0, 00:14:10.204 "data_size": 65536 00:14:10.204 } 00:14:10.204 ] 00:14:10.204 }' 00:14:10.204 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.464 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.464 11:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.464 11:57:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.464 "name": "raid_bdev1", 00:14:10.464 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:10.464 "strip_size_kb": 64, 00:14:10.464 "state": "online", 00:14:10.464 "raid_level": "raid5f", 00:14:10.464 "superblock": false, 00:14:10.464 "num_base_bdevs": 4, 00:14:10.464 
"num_base_bdevs_discovered": 4, 00:14:10.464 "num_base_bdevs_operational": 4, 00:14:10.464 "process": { 00:14:10.464 "type": "rebuild", 00:14:10.464 "target": "spare", 00:14:10.464 "progress": { 00:14:10.464 "blocks": 21120, 00:14:10.464 "percent": 10 00:14:10.464 } 00:14:10.464 }, 00:14:10.464 "base_bdevs_list": [ 00:14:10.464 { 00:14:10.464 "name": "spare", 00:14:10.464 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:10.464 "is_configured": true, 00:14:10.464 "data_offset": 0, 00:14:10.464 "data_size": 65536 00:14:10.464 }, 00:14:10.464 { 00:14:10.464 "name": "BaseBdev2", 00:14:10.464 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:10.464 "is_configured": true, 00:14:10.464 "data_offset": 0, 00:14:10.464 "data_size": 65536 00:14:10.464 }, 00:14:10.464 { 00:14:10.464 "name": "BaseBdev3", 00:14:10.464 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:10.464 "is_configured": true, 00:14:10.464 "data_offset": 0, 00:14:10.464 "data_size": 65536 00:14:10.464 }, 00:14:10.464 { 00:14:10.464 "name": "BaseBdev4", 00:14:10.464 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:10.464 "is_configured": true, 00:14:10.464 "data_offset": 0, 00:14:10.464 "data_size": 65536 00:14:10.464 } 00:14:10.464 ] 00:14:10.464 }' 00:14:10.464 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.465 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.465 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.465 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.465 11:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.404 11:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.664 11:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.664 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.664 "name": "raid_bdev1", 00:14:11.664 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:11.664 "strip_size_kb": 64, 00:14:11.664 "state": "online", 00:14:11.664 "raid_level": "raid5f", 00:14:11.664 "superblock": false, 00:14:11.664 "num_base_bdevs": 4, 00:14:11.664 "num_base_bdevs_discovered": 4, 00:14:11.664 "num_base_bdevs_operational": 4, 00:14:11.665 "process": { 00:14:11.665 "type": "rebuild", 00:14:11.665 "target": "spare", 00:14:11.665 "progress": { 00:14:11.665 "blocks": 42240, 00:14:11.665 "percent": 21 00:14:11.665 } 00:14:11.665 }, 00:14:11.665 "base_bdevs_list": [ 00:14:11.665 { 00:14:11.665 "name": "spare", 00:14:11.665 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:11.665 "is_configured": true, 00:14:11.665 "data_offset": 0, 00:14:11.665 "data_size": 65536 00:14:11.665 }, 00:14:11.665 { 00:14:11.665 "name": "BaseBdev2", 00:14:11.665 "uuid": 
"6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:11.665 "is_configured": true, 00:14:11.665 "data_offset": 0, 00:14:11.665 "data_size": 65536 00:14:11.665 }, 00:14:11.665 { 00:14:11.665 "name": "BaseBdev3", 00:14:11.665 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:11.665 "is_configured": true, 00:14:11.665 "data_offset": 0, 00:14:11.665 "data_size": 65536 00:14:11.665 }, 00:14:11.665 { 00:14:11.665 "name": "BaseBdev4", 00:14:11.665 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:11.665 "is_configured": true, 00:14:11.665 "data_offset": 0, 00:14:11.665 "data_size": 65536 00:14:11.665 } 00:14:11.665 ] 00:14:11.665 }' 00:14:11.665 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.665 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.665 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.665 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.665 11:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.604 11:57:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.604 "name": "raid_bdev1", 00:14:12.604 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:12.604 "strip_size_kb": 64, 00:14:12.604 "state": "online", 00:14:12.604 "raid_level": "raid5f", 00:14:12.604 "superblock": false, 00:14:12.604 "num_base_bdevs": 4, 00:14:12.604 "num_base_bdevs_discovered": 4, 00:14:12.604 "num_base_bdevs_operational": 4, 00:14:12.604 "process": { 00:14:12.604 "type": "rebuild", 00:14:12.604 "target": "spare", 00:14:12.604 "progress": { 00:14:12.604 "blocks": 65280, 00:14:12.604 "percent": 33 00:14:12.604 } 00:14:12.604 }, 00:14:12.604 "base_bdevs_list": [ 00:14:12.604 { 00:14:12.604 "name": "spare", 00:14:12.604 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:12.604 "is_configured": true, 00:14:12.604 "data_offset": 0, 00:14:12.604 "data_size": 65536 00:14:12.604 }, 00:14:12.604 { 00:14:12.604 "name": "BaseBdev2", 00:14:12.604 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:12.604 "is_configured": true, 00:14:12.604 "data_offset": 0, 00:14:12.604 "data_size": 65536 00:14:12.604 }, 00:14:12.604 { 00:14:12.604 "name": "BaseBdev3", 00:14:12.604 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:12.604 "is_configured": true, 00:14:12.604 "data_offset": 0, 00:14:12.604 "data_size": 65536 00:14:12.604 }, 00:14:12.604 { 00:14:12.604 "name": "BaseBdev4", 00:14:12.604 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:12.604 "is_configured": true, 00:14:12.604 "data_offset": 0, 00:14:12.604 "data_size": 65536 00:14:12.604 } 
00:14:12.604 ] 00:14:12.604 }' 00:14:12.604 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.864 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.864 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.864 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.864 11:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.804 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.804 "name": "raid_bdev1", 00:14:13.804 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:13.804 
"strip_size_kb": 64, 00:14:13.804 "state": "online", 00:14:13.804 "raid_level": "raid5f", 00:14:13.804 "superblock": false, 00:14:13.804 "num_base_bdevs": 4, 00:14:13.804 "num_base_bdevs_discovered": 4, 00:14:13.804 "num_base_bdevs_operational": 4, 00:14:13.804 "process": { 00:14:13.804 "type": "rebuild", 00:14:13.805 "target": "spare", 00:14:13.805 "progress": { 00:14:13.805 "blocks": 86400, 00:14:13.805 "percent": 43 00:14:13.805 } 00:14:13.805 }, 00:14:13.805 "base_bdevs_list": [ 00:14:13.805 { 00:14:13.805 "name": "spare", 00:14:13.805 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:13.805 "is_configured": true, 00:14:13.805 "data_offset": 0, 00:14:13.805 "data_size": 65536 00:14:13.805 }, 00:14:13.805 { 00:14:13.805 "name": "BaseBdev2", 00:14:13.805 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:13.805 "is_configured": true, 00:14:13.805 "data_offset": 0, 00:14:13.805 "data_size": 65536 00:14:13.805 }, 00:14:13.805 { 00:14:13.805 "name": "BaseBdev3", 00:14:13.805 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:13.805 "is_configured": true, 00:14:13.805 "data_offset": 0, 00:14:13.805 "data_size": 65536 00:14:13.805 }, 00:14:13.805 { 00:14:13.805 "name": "BaseBdev4", 00:14:13.805 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:13.805 "is_configured": true, 00:14:13.805 "data_offset": 0, 00:14:13.805 "data_size": 65536 00:14:13.805 } 00:14:13.805 ] 00:14:13.805 }' 00:14:13.805 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.805 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.805 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.065 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.065 11:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.006 11:57:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.006 "name": "raid_bdev1", 00:14:15.006 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:15.006 "strip_size_kb": 64, 00:14:15.006 "state": "online", 00:14:15.006 "raid_level": "raid5f", 00:14:15.006 "superblock": false, 00:14:15.006 "num_base_bdevs": 4, 00:14:15.006 "num_base_bdevs_discovered": 4, 00:14:15.006 "num_base_bdevs_operational": 4, 00:14:15.006 "process": { 00:14:15.006 "type": "rebuild", 00:14:15.006 "target": "spare", 00:14:15.006 "progress": { 00:14:15.006 "blocks": 109440, 00:14:15.006 "percent": 55 00:14:15.006 } 00:14:15.006 }, 00:14:15.006 "base_bdevs_list": [ 00:14:15.006 { 00:14:15.006 "name": "spare", 00:14:15.006 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 
00:14:15.006 "is_configured": true, 00:14:15.006 "data_offset": 0, 00:14:15.006 "data_size": 65536 00:14:15.006 }, 00:14:15.006 { 00:14:15.006 "name": "BaseBdev2", 00:14:15.006 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:15.006 "is_configured": true, 00:14:15.006 "data_offset": 0, 00:14:15.006 "data_size": 65536 00:14:15.006 }, 00:14:15.006 { 00:14:15.006 "name": "BaseBdev3", 00:14:15.006 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:15.006 "is_configured": true, 00:14:15.006 "data_offset": 0, 00:14:15.006 "data_size": 65536 00:14:15.006 }, 00:14:15.006 { 00:14:15.006 "name": "BaseBdev4", 00:14:15.006 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:15.006 "is_configured": true, 00:14:15.006 "data_offset": 0, 00:14:15.006 "data_size": 65536 00:14:15.006 } 00:14:15.006 ] 00:14:15.006 }' 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.006 11:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.389 "name": "raid_bdev1", 00:14:16.389 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:16.389 "strip_size_kb": 64, 00:14:16.389 "state": "online", 00:14:16.389 "raid_level": "raid5f", 00:14:16.389 "superblock": false, 00:14:16.389 "num_base_bdevs": 4, 00:14:16.389 "num_base_bdevs_discovered": 4, 00:14:16.389 "num_base_bdevs_operational": 4, 00:14:16.389 "process": { 00:14:16.389 "type": "rebuild", 00:14:16.389 "target": "spare", 00:14:16.389 "progress": { 00:14:16.389 "blocks": 130560, 00:14:16.389 "percent": 66 00:14:16.389 } 00:14:16.389 }, 00:14:16.389 "base_bdevs_list": [ 00:14:16.389 { 00:14:16.389 "name": "spare", 00:14:16.389 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:16.389 "is_configured": true, 00:14:16.389 "data_offset": 0, 00:14:16.389 "data_size": 65536 00:14:16.389 }, 00:14:16.389 { 00:14:16.389 "name": "BaseBdev2", 00:14:16.389 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:16.389 "is_configured": true, 00:14:16.389 "data_offset": 0, 00:14:16.389 "data_size": 65536 00:14:16.389 }, 00:14:16.389 { 00:14:16.389 "name": "BaseBdev3", 00:14:16.389 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:16.389 "is_configured": true, 00:14:16.389 "data_offset": 0, 00:14:16.389 "data_size": 65536 00:14:16.389 }, 00:14:16.389 { 00:14:16.389 "name": 
"BaseBdev4", 00:14:16.389 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:16.389 "is_configured": true, 00:14:16.389 "data_offset": 0, 00:14:16.389 "data_size": 65536 00:14:16.389 } 00:14:16.389 ] 00:14:16.389 }' 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.389 11:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.341 11:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.341 11:57:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.341 "name": "raid_bdev1", 00:14:17.341 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:17.341 "strip_size_kb": 64, 00:14:17.341 "state": "online", 00:14:17.341 "raid_level": "raid5f", 00:14:17.341 "superblock": false, 00:14:17.341 "num_base_bdevs": 4, 00:14:17.341 "num_base_bdevs_discovered": 4, 00:14:17.341 "num_base_bdevs_operational": 4, 00:14:17.341 "process": { 00:14:17.341 "type": "rebuild", 00:14:17.341 "target": "spare", 00:14:17.341 "progress": { 00:14:17.341 "blocks": 153600, 00:14:17.341 "percent": 78 00:14:17.341 } 00:14:17.341 }, 00:14:17.341 "base_bdevs_list": [ 00:14:17.341 { 00:14:17.341 "name": "spare", 00:14:17.341 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:17.341 "is_configured": true, 00:14:17.341 "data_offset": 0, 00:14:17.341 "data_size": 65536 00:14:17.341 }, 00:14:17.341 { 00:14:17.341 "name": "BaseBdev2", 00:14:17.341 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:17.341 "is_configured": true, 00:14:17.341 "data_offset": 0, 00:14:17.341 "data_size": 65536 00:14:17.341 }, 00:14:17.341 { 00:14:17.341 "name": "BaseBdev3", 00:14:17.341 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:17.341 "is_configured": true, 00:14:17.341 "data_offset": 0, 00:14:17.341 "data_size": 65536 00:14:17.341 }, 00:14:17.341 { 00:14:17.341 "name": "BaseBdev4", 00:14:17.341 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:17.341 "is_configured": true, 00:14:17.341 "data_offset": 0, 00:14:17.341 "data_size": 65536 00:14:17.341 } 00:14:17.342 ] 00:14:17.342 }' 00:14:17.342 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.342 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.342 11:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.342 11:57:15 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.342 11:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.335 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.335 "name": "raid_bdev1", 00:14:18.335 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:18.335 "strip_size_kb": 64, 00:14:18.335 "state": "online", 00:14:18.336 "raid_level": "raid5f", 00:14:18.336 "superblock": false, 00:14:18.336 "num_base_bdevs": 4, 00:14:18.336 "num_base_bdevs_discovered": 4, 00:14:18.336 "num_base_bdevs_operational": 4, 00:14:18.336 "process": { 00:14:18.336 "type": "rebuild", 00:14:18.336 "target": "spare", 00:14:18.336 "progress": { 00:14:18.336 "blocks": 174720, 00:14:18.336 "percent": 88 
00:14:18.336 } 00:14:18.336 }, 00:14:18.336 "base_bdevs_list": [ 00:14:18.336 { 00:14:18.336 "name": "spare", 00:14:18.336 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:18.336 "is_configured": true, 00:14:18.336 "data_offset": 0, 00:14:18.336 "data_size": 65536 00:14:18.336 }, 00:14:18.336 { 00:14:18.336 "name": "BaseBdev2", 00:14:18.336 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:18.336 "is_configured": true, 00:14:18.336 "data_offset": 0, 00:14:18.336 "data_size": 65536 00:14:18.336 }, 00:14:18.336 { 00:14:18.336 "name": "BaseBdev3", 00:14:18.336 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:18.336 "is_configured": true, 00:14:18.336 "data_offset": 0, 00:14:18.336 "data_size": 65536 00:14:18.336 }, 00:14:18.336 { 00:14:18.336 "name": "BaseBdev4", 00:14:18.336 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:18.336 "is_configured": true, 00:14:18.336 "data_offset": 0, 00:14:18.336 "data_size": 65536 00:14:18.336 } 00:14:18.336 ] 00:14:18.336 }' 00:14:18.613 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.613 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.613 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.613 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.613 11:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.553 "name": "raid_bdev1", 00:14:19.553 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:19.553 "strip_size_kb": 64, 00:14:19.553 "state": "online", 00:14:19.553 "raid_level": "raid5f", 00:14:19.553 "superblock": false, 00:14:19.553 "num_base_bdevs": 4, 00:14:19.553 "num_base_bdevs_discovered": 4, 00:14:19.553 "num_base_bdevs_operational": 4, 00:14:19.553 "process": { 00:14:19.553 "type": "rebuild", 00:14:19.553 "target": "spare", 00:14:19.553 "progress": { 00:14:19.553 "blocks": 195840, 00:14:19.553 "percent": 99 00:14:19.553 } 00:14:19.553 }, 00:14:19.553 "base_bdevs_list": [ 00:14:19.553 { 00:14:19.553 "name": "spare", 00:14:19.553 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 }, 00:14:19.553 { 00:14:19.553 "name": "BaseBdev2", 00:14:19.553 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 }, 00:14:19.553 { 00:14:19.553 "name": "BaseBdev3", 00:14:19.553 "uuid": 
"4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 }, 00:14:19.553 { 00:14:19.553 "name": "BaseBdev4", 00:14:19.553 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 } 00:14:19.553 ] 00:14:19.553 }' 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.553 [2024-12-06 11:57:17.209411] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.553 [2024-12-06 11:57:17.209550] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.553 [2024-12-06 11:57:17.209609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.553 11:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.935 "name": "raid_bdev1", 00:14:20.935 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:20.935 "strip_size_kb": 64, 00:14:20.935 "state": "online", 00:14:20.935 "raid_level": "raid5f", 00:14:20.935 "superblock": false, 00:14:20.935 "num_base_bdevs": 4, 00:14:20.935 "num_base_bdevs_discovered": 4, 00:14:20.935 "num_base_bdevs_operational": 4, 00:14:20.935 "base_bdevs_list": [ 00:14:20.935 { 00:14:20.935 "name": "spare", 00:14:20.935 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:20.935 "is_configured": true, 00:14:20.935 "data_offset": 0, 00:14:20.935 "data_size": 65536 00:14:20.935 }, 00:14:20.935 { 00:14:20.935 "name": "BaseBdev2", 00:14:20.935 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:20.935 "is_configured": true, 00:14:20.935 "data_offset": 0, 00:14:20.935 "data_size": 65536 00:14:20.935 }, 00:14:20.935 { 00:14:20.935 "name": "BaseBdev3", 00:14:20.935 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:20.935 "is_configured": true, 00:14:20.935 "data_offset": 0, 00:14:20.935 "data_size": 65536 00:14:20.935 }, 00:14:20.935 { 00:14:20.935 "name": "BaseBdev4", 00:14:20.935 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:20.935 "is_configured": true, 00:14:20.935 "data_offset": 0, 00:14:20.935 "data_size": 65536 00:14:20.935 } 00:14:20.935 ] 00:14:20.935 }' 00:14:20.935 11:57:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.935 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.935 "name": "raid_bdev1", 00:14:20.935 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:20.935 "strip_size_kb": 64, 00:14:20.935 "state": "online", 00:14:20.936 "raid_level": "raid5f", 00:14:20.936 "superblock": false, 00:14:20.936 "num_base_bdevs": 4, 00:14:20.936 
"num_base_bdevs_discovered": 4, 00:14:20.936 "num_base_bdevs_operational": 4, 00:14:20.936 "base_bdevs_list": [ 00:14:20.936 { 00:14:20.936 "name": "spare", 00:14:20.936 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev2", 00:14:20.936 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev3", 00:14:20.936 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev4", 00:14:20.936 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 } 00:14:20.936 ] 00:14:20.936 }' 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.936 "name": "raid_bdev1", 00:14:20.936 "uuid": "aac665e9-9fbb-4f1a-94d3-8ad06870f337", 00:14:20.936 "strip_size_kb": 64, 00:14:20.936 "state": "online", 00:14:20.936 "raid_level": "raid5f", 00:14:20.936 "superblock": false, 00:14:20.936 "num_base_bdevs": 4, 00:14:20.936 "num_base_bdevs_discovered": 4, 00:14:20.936 "num_base_bdevs_operational": 4, 00:14:20.936 "base_bdevs_list": [ 00:14:20.936 { 00:14:20.936 "name": "spare", 00:14:20.936 "uuid": "a0156115-2297-5cd8-a047-dc5565ea91a6", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev2", 00:14:20.936 "uuid": "6efa45e2-63e5-50e0-892e-2b01f7970960", 00:14:20.936 "is_configured": true, 00:14:20.936 
"data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev3", 00:14:20.936 "uuid": "4ac53d38-4281-5462-9bfd-19155a8fac00", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 }, 00:14:20.936 { 00:14:20.936 "name": "BaseBdev4", 00:14:20.936 "uuid": "37430b25-91df-5f75-b32b-5c7a9a2f8a55", 00:14:20.936 "is_configured": true, 00:14:20.936 "data_offset": 0, 00:14:20.936 "data_size": 65536 00:14:20.936 } 00:14:20.936 ] 00:14:20.936 }' 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.936 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.196 [2024-12-06 11:57:18.939713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.196 [2024-12-06 11:57:18.939779] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.196 [2024-12-06 11:57:18.939874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.196 [2024-12-06 11:57:18.939990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.196 [2024-12-06 11:57:18.940026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.196 11:57:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.196 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.456 11:57:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:21.456 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.456 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:21.456 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:21.456 /dev/nbd0 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.717 11:57:19 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.717 1+0 records in 00:14:21.717 1+0 records out 00:14:21.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542995 s, 7.5 MB/s 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:14:21.717 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:21.717 /dev/nbd1 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.977 1+0 records in 00:14:21.977 1+0 records out 00:14:21.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268773 s, 15.2 MB/s 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.977 11:57:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:21.977 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.978 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94611 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94611 ']' 00:14:22.238 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94611 00:14:22.498 11:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94611 00:14:22.498 killing process with pid 94611 00:14:22.498 
Received shutdown signal, test time was about 60.000000 seconds 00:14:22.498 00:14:22.498 Latency(us) 00:14:22.498 [2024-12-06T11:57:20.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.498 [2024-12-06T11:57:20.256Z] =================================================================================================================== 00:14:22.498 [2024-12-06T11:57:20.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94611' 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94611 00:14:22.498 [2024-12-06 11:57:20.039269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.498 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94611 00:14:22.498 [2024-12-06 11:57:20.090405] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:22.759 00:14:22.759 real 0m18.149s 00:14:22.759 user 0m21.809s 00:14:22.759 sys 0m2.229s 00:14:22.759 ************************************ 00:14:22.759 END TEST raid5f_rebuild_test 00:14:22.759 ************************************ 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.759 11:57:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:22.759 11:57:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:14:22.759 11:57:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.759 11:57:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.759 ************************************ 00:14:22.759 START TEST raid5f_rebuild_test_sb 00:14:22.759 ************************************ 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95111 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95111 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95111 ']' 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.759 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.760 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.760 11:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.760 [2024-12-06 11:57:20.490156] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:22.760 [2024-12-06 11:57:20.490426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95111 ] 00:14:22.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:22.760 Zero copy mechanism will not be used.
00:14:23.020 [2024-12-06 11:57:20.633171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.020 [2024-12-06 11:57:20.676489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.020 [2024-12-06 11:57:20.718553] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.020 [2024-12-06 11:57:20.718660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.590 BaseBdev1_malloc 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.590 [2024-12-06 11:57:21.328543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.590 [2024-12-06 11:57:21.328657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.590 [2024-12-06 11:57:21.328696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000006680 00:14:23.590 [2024-12-06 11:57:21.328729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.590 [2024-12-06 11:57:21.330792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.590 [2024-12-06 11:57:21.330828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.590 BaseBdev1 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.590 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 BaseBdev2_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 [2024-12-06 11:57:21.370074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:23.851 [2024-12-06 11:57:21.370177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.851 [2024-12-06 11:57:21.370223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.851 [2024-12-06 11:57:21.370279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.851 [2024-12-06 
11:57:21.374815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.851 [2024-12-06 11:57:21.374867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:23.851 BaseBdev2 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 BaseBdev3_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 [2024-12-06 11:57:21.400820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:23.851 [2024-12-06 11:57:21.400931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.851 [2024-12-06 11:57:21.400976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.851 [2024-12-06 11:57:21.401006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.851 [2024-12-06 11:57:21.403067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.851 [2024-12-06 11:57:21.403134] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:23.851 BaseBdev3 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 BaseBdev4_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 [2024-12-06 11:57:21.429404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:23.851 [2024-12-06 11:57:21.429446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.851 [2024-12-06 11:57:21.429480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:23.851 [2024-12-06 11:57:21.429488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.851 [2024-12-06 11:57:21.431487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.851 [2024-12-06 11:57:21.431519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:23.851 BaseBdev4 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 spare_malloc 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 spare_delay 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.851 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.851 [2024-12-06 11:57:21.470003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.851 [2024-12-06 11:57:21.470046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.851 [2024-12-06 11:57:21.470079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:23.851 [2024-12-06 11:57:21.470087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.851 [2024-12-06 11:57:21.472245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.851 
[2024-12-06 11:57:21.472282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.851 spare 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.852 [2024-12-06 11:57:21.482052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.852 [2024-12-06 11:57:21.483907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.852 [2024-12-06 11:57:21.484032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.852 [2024-12-06 11:57:21.484096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.852 [2024-12-06 11:57:21.484304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:23.852 [2024-12-06 11:57:21.484317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:23.852 [2024-12-06 11:57:21.484604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:23.852 [2024-12-06 11:57:21.485036] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:23.852 [2024-12-06 11:57:21.485048] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:23.852 [2024-12-06 11:57:21.485177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.852 "name": "raid_bdev1", 00:14:23.852 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:23.852 "strip_size_kb": 64, 00:14:23.852 "state": "online", 
00:14:23.852 "raid_level": "raid5f", 00:14:23.852 "superblock": true, 00:14:23.852 "num_base_bdevs": 4, 00:14:23.852 "num_base_bdevs_discovered": 4, 00:14:23.852 "num_base_bdevs_operational": 4, 00:14:23.852 "base_bdevs_list": [ 00:14:23.852 { 00:14:23.852 "name": "BaseBdev1", 00:14:23.852 "uuid": "f2ac3698-ea0f-5749-b1f9-19e656fc66f9", 00:14:23.852 "is_configured": true, 00:14:23.852 "data_offset": 2048, 00:14:23.852 "data_size": 63488 00:14:23.852 }, 00:14:23.852 { 00:14:23.852 "name": "BaseBdev2", 00:14:23.852 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:23.852 "is_configured": true, 00:14:23.852 "data_offset": 2048, 00:14:23.852 "data_size": 63488 00:14:23.852 }, 00:14:23.852 { 00:14:23.852 "name": "BaseBdev3", 00:14:23.852 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:23.852 "is_configured": true, 00:14:23.852 "data_offset": 2048, 00:14:23.852 "data_size": 63488 00:14:23.852 }, 00:14:23.852 { 00:14:23.852 "name": "BaseBdev4", 00:14:23.852 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:23.852 "is_configured": true, 00:14:23.852 "data_offset": 2048, 00:14:23.852 "data_size": 63488 00:14:23.852 } 00:14:23.852 ] 00:14:23.852 }' 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.852 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:24.422 [2024-12-06 11:57:21.934373] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.422 11:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:24.422 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:24.683 [2024-12-06 11:57:22.213760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:24.683 /dev/nbd0 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.683 1+0 records in 00:14:24.683 1+0 records out 00:14:24.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392999 s, 10.4 MB/s 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.683 11:57:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:24.683 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:24.943 496+0 records in 00:14:24.943 496+0 records out 00:14:24.943 97517568 bytes (98 MB, 93 MiB) copied, 0.381813 s, 255 MB/s 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.943 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.204 [2024-12-06 11:57:22.890269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.204 [2024-12-06 11:57:22.922278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.204 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.463 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.463 "name": "raid_bdev1", 00:14:25.463 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:25.463 "strip_size_kb": 64, 00:14:25.463 "state": "online", 00:14:25.463 "raid_level": "raid5f", 00:14:25.463 "superblock": true, 00:14:25.463 "num_base_bdevs": 4, 00:14:25.463 "num_base_bdevs_discovered": 3, 00:14:25.463 "num_base_bdevs_operational": 3, 00:14:25.463 "base_bdevs_list": [ 00:14:25.463 { 00:14:25.463 "name": null, 00:14:25.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.463 "is_configured": false, 00:14:25.463 "data_offset": 0, 00:14:25.463 "data_size": 63488 00:14:25.463 }, 00:14:25.463 { 00:14:25.463 
"name": "BaseBdev2", 00:14:25.463 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:25.463 "is_configured": true, 00:14:25.463 "data_offset": 2048, 00:14:25.463 "data_size": 63488 00:14:25.463 }, 00:14:25.463 { 00:14:25.463 "name": "BaseBdev3", 00:14:25.463 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:25.463 "is_configured": true, 00:14:25.463 "data_offset": 2048, 00:14:25.463 "data_size": 63488 00:14:25.463 }, 00:14:25.463 { 00:14:25.464 "name": "BaseBdev4", 00:14:25.464 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:25.464 "is_configured": true, 00:14:25.464 "data_offset": 2048, 00:14:25.464 "data_size": 63488 00:14:25.464 } 00:14:25.464 ] 00:14:25.464 }' 00:14:25.464 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.464 11:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.723 11:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.723 11:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.723 11:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.723 [2024-12-06 11:57:23.389469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.723 [2024-12-06 11:57:23.392811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:25.723 [2024-12-06 11:57:23.395057] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.723 11:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.723 11:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.660 11:57:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.660 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.920 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.920 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.920 "name": "raid_bdev1", 00:14:26.920 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:26.920 "strip_size_kb": 64, 00:14:26.920 "state": "online", 00:14:26.920 "raid_level": "raid5f", 00:14:26.920 "superblock": true, 00:14:26.920 "num_base_bdevs": 4, 00:14:26.920 "num_base_bdevs_discovered": 4, 00:14:26.920 "num_base_bdevs_operational": 4, 00:14:26.920 "process": { 00:14:26.920 "type": "rebuild", 00:14:26.920 "target": "spare", 00:14:26.920 "progress": { 00:14:26.920 "blocks": 19200, 00:14:26.920 "percent": 10 00:14:26.920 } 00:14:26.920 }, 00:14:26.920 "base_bdevs_list": [ 00:14:26.920 { 00:14:26.920 "name": "spare", 00:14:26.920 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:26.920 "is_configured": true, 00:14:26.920 "data_offset": 2048, 00:14:26.920 "data_size": 63488 00:14:26.920 }, 00:14:26.920 { 00:14:26.920 "name": "BaseBdev2", 00:14:26.920 "uuid": 
"7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:26.920 "is_configured": true, 00:14:26.920 "data_offset": 2048, 00:14:26.920 "data_size": 63488 00:14:26.920 }, 00:14:26.920 { 00:14:26.920 "name": "BaseBdev3", 00:14:26.921 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:26.921 "is_configured": true, 00:14:26.921 "data_offset": 2048, 00:14:26.921 "data_size": 63488 00:14:26.921 }, 00:14:26.921 { 00:14:26.921 "name": "BaseBdev4", 00:14:26.921 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:26.921 "is_configured": true, 00:14:26.921 "data_offset": 2048, 00:14:26.921 "data_size": 63488 00:14:26.921 } 00:14:26.921 ] 00:14:26.921 }' 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.921 [2024-12-06 11:57:24.557619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.921 [2024-12-06 11:57:24.600371] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.921 [2024-12-06 11:57:24.600423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.921 [2024-12-06 11:57:24.600458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.921 [2024-12-06 11:57:24.600468] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:26.921 "name": "raid_bdev1", 00:14:26.921 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:26.921 "strip_size_kb": 64, 00:14:26.921 "state": "online", 00:14:26.921 "raid_level": "raid5f", 00:14:26.921 "superblock": true, 00:14:26.921 "num_base_bdevs": 4, 00:14:26.921 "num_base_bdevs_discovered": 3, 00:14:26.921 "num_base_bdevs_operational": 3, 00:14:26.921 "base_bdevs_list": [ 00:14:26.921 { 00:14:26.921 "name": null, 00:14:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.921 "is_configured": false, 00:14:26.921 "data_offset": 0, 00:14:26.921 "data_size": 63488 00:14:26.921 }, 00:14:26.921 { 00:14:26.921 "name": "BaseBdev2", 00:14:26.921 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:26.921 "is_configured": true, 00:14:26.921 "data_offset": 2048, 00:14:26.921 "data_size": 63488 00:14:26.921 }, 00:14:26.921 { 00:14:26.921 "name": "BaseBdev3", 00:14:26.921 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:26.921 "is_configured": true, 00:14:26.921 "data_offset": 2048, 00:14:26.921 "data_size": 63488 00:14:26.921 }, 00:14:26.921 { 00:14:26.921 "name": "BaseBdev4", 00:14:26.921 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:26.921 "is_configured": true, 00:14:26.921 "data_offset": 2048, 00:14:26.921 "data_size": 63488 00:14:26.921 } 00:14:26.921 ] 00:14:26.921 }' 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.921 11:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.489 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.489 "name": "raid_bdev1", 00:14:27.489 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:27.489 "strip_size_kb": 64, 00:14:27.489 "state": "online", 00:14:27.489 "raid_level": "raid5f", 00:14:27.489 "superblock": true, 00:14:27.489 "num_base_bdevs": 4, 00:14:27.489 "num_base_bdevs_discovered": 3, 00:14:27.489 "num_base_bdevs_operational": 3, 00:14:27.489 "base_bdevs_list": [ 00:14:27.489 { 00:14:27.489 "name": null, 00:14:27.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.489 "is_configured": false, 00:14:27.489 "data_offset": 0, 00:14:27.489 "data_size": 63488 00:14:27.489 }, 00:14:27.489 { 00:14:27.489 "name": "BaseBdev2", 00:14:27.489 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:27.489 "is_configured": true, 00:14:27.489 "data_offset": 2048, 00:14:27.489 "data_size": 63488 00:14:27.489 }, 00:14:27.489 { 00:14:27.489 "name": "BaseBdev3", 00:14:27.490 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:27.490 "is_configured": true, 00:14:27.490 "data_offset": 2048, 00:14:27.490 "data_size": 63488 00:14:27.490 }, 00:14:27.490 { 00:14:27.490 "name": "BaseBdev4", 00:14:27.490 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:27.490 "is_configured": true, 
00:14:27.490 "data_offset": 2048, 00:14:27.490 "data_size": 63488 00:14:27.490 } 00:14:27.490 ] 00:14:27.490 }' 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.490 [2024-12-06 11:57:25.200710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.490 [2024-12-06 11:57:25.203891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:27.490 [2024-12-06 11:57:25.206017] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.490 11:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.869 11:57:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.869 "name": "raid_bdev1", 00:14:28.869 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:28.869 "strip_size_kb": 64, 00:14:28.869 "state": "online", 00:14:28.869 "raid_level": "raid5f", 00:14:28.869 "superblock": true, 00:14:28.869 "num_base_bdevs": 4, 00:14:28.869 "num_base_bdevs_discovered": 4, 00:14:28.869 "num_base_bdevs_operational": 4, 00:14:28.869 "process": { 00:14:28.869 "type": "rebuild", 00:14:28.869 "target": "spare", 00:14:28.869 "progress": { 00:14:28.869 "blocks": 19200, 00:14:28.869 "percent": 10 00:14:28.869 } 00:14:28.869 }, 00:14:28.869 "base_bdevs_list": [ 00:14:28.869 { 00:14:28.869 "name": "spare", 00:14:28.869 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev2", 00:14:28.869 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev3", 00:14:28.869 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 
63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev4", 00:14:28.869 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 } 00:14:28.869 ] 00:14:28.869 }' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:28.869 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=520 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.869 11:57:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.869 "name": "raid_bdev1", 00:14:28.869 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:28.869 "strip_size_kb": 64, 00:14:28.869 "state": "online", 00:14:28.869 "raid_level": "raid5f", 00:14:28.869 "superblock": true, 00:14:28.869 "num_base_bdevs": 4, 00:14:28.869 "num_base_bdevs_discovered": 4, 00:14:28.869 "num_base_bdevs_operational": 4, 00:14:28.869 "process": { 00:14:28.869 "type": "rebuild", 00:14:28.869 "target": "spare", 00:14:28.869 "progress": { 00:14:28.869 "blocks": 21120, 00:14:28.869 "percent": 11 00:14:28.869 } 00:14:28.869 }, 00:14:28.869 "base_bdevs_list": [ 00:14:28.869 { 00:14:28.869 "name": "spare", 00:14:28.869 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev2", 00:14:28.869 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev3", 00:14:28.869 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 
63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "BaseBdev4", 00:14:28.869 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 } 00:14:28.869 ] 00:14:28.869 }' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.869 11:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.809 11:57:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.809 "name": "raid_bdev1", 00:14:29.809 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:29.809 "strip_size_kb": 64, 00:14:29.809 "state": "online", 00:14:29.809 "raid_level": "raid5f", 00:14:29.809 "superblock": true, 00:14:29.809 "num_base_bdevs": 4, 00:14:29.809 "num_base_bdevs_discovered": 4, 00:14:29.809 "num_base_bdevs_operational": 4, 00:14:29.809 "process": { 00:14:29.809 "type": "rebuild", 00:14:29.809 "target": "spare", 00:14:29.809 "progress": { 00:14:29.809 "blocks": 44160, 00:14:29.809 "percent": 23 00:14:29.809 } 00:14:29.809 }, 00:14:29.809 "base_bdevs_list": [ 00:14:29.809 { 00:14:29.809 "name": "spare", 00:14:29.809 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:29.809 "is_configured": true, 00:14:29.809 "data_offset": 2048, 00:14:29.809 "data_size": 63488 00:14:29.809 }, 00:14:29.809 { 00:14:29.809 "name": "BaseBdev2", 00:14:29.809 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:29.809 "is_configured": true, 00:14:29.809 "data_offset": 2048, 00:14:29.809 "data_size": 63488 00:14:29.809 }, 00:14:29.809 { 00:14:29.809 "name": "BaseBdev3", 00:14:29.809 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:29.809 "is_configured": true, 00:14:29.809 "data_offset": 2048, 00:14:29.809 "data_size": 63488 00:14:29.809 }, 00:14:29.809 { 00:14:29.809 "name": "BaseBdev4", 00:14:29.809 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:29.809 "is_configured": true, 00:14:29.809 "data_offset": 2048, 00:14:29.809 "data_size": 63488 00:14:29.809 } 00:14:29.809 ] 00:14:29.809 }' 00:14:29.809 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.069 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.069 11:57:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.070 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.070 11:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.010 "name": "raid_bdev1", 00:14:31.010 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:31.010 "strip_size_kb": 64, 00:14:31.010 "state": "online", 00:14:31.010 "raid_level": "raid5f", 00:14:31.010 "superblock": true, 00:14:31.010 "num_base_bdevs": 4, 00:14:31.010 "num_base_bdevs_discovered": 4, 00:14:31.010 "num_base_bdevs_operational": 
4, 00:14:31.010 "process": { 00:14:31.010 "type": "rebuild", 00:14:31.010 "target": "spare", 00:14:31.010 "progress": { 00:14:31.010 "blocks": 65280, 00:14:31.010 "percent": 34 00:14:31.010 } 00:14:31.010 }, 00:14:31.010 "base_bdevs_list": [ 00:14:31.010 { 00:14:31.010 "name": "spare", 00:14:31.010 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:31.010 "is_configured": true, 00:14:31.010 "data_offset": 2048, 00:14:31.010 "data_size": 63488 00:14:31.010 }, 00:14:31.010 { 00:14:31.010 "name": "BaseBdev2", 00:14:31.010 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:31.010 "is_configured": true, 00:14:31.010 "data_offset": 2048, 00:14:31.010 "data_size": 63488 00:14:31.010 }, 00:14:31.010 { 00:14:31.010 "name": "BaseBdev3", 00:14:31.010 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:31.010 "is_configured": true, 00:14:31.010 "data_offset": 2048, 00:14:31.010 "data_size": 63488 00:14:31.010 }, 00:14:31.010 { 00:14:31.010 "name": "BaseBdev4", 00:14:31.010 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:31.010 "is_configured": true, 00:14:31.010 "data_offset": 2048, 00:14:31.010 "data_size": 63488 00:14:31.010 } 00:14:31.010 ] 00:14:31.010 }' 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.010 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.269 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.269 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.269 11:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.210 
11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.210 "name": "raid_bdev1", 00:14:32.210 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:32.210 "strip_size_kb": 64, 00:14:32.210 "state": "online", 00:14:32.210 "raid_level": "raid5f", 00:14:32.210 "superblock": true, 00:14:32.210 "num_base_bdevs": 4, 00:14:32.210 "num_base_bdevs_discovered": 4, 00:14:32.210 "num_base_bdevs_operational": 4, 00:14:32.210 "process": { 00:14:32.210 "type": "rebuild", 00:14:32.210 "target": "spare", 00:14:32.210 "progress": { 00:14:32.210 "blocks": 88320, 00:14:32.210 "percent": 46 00:14:32.210 } 00:14:32.210 }, 00:14:32.210 "base_bdevs_list": [ 00:14:32.210 { 00:14:32.210 "name": "spare", 00:14:32.210 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev2", 00:14:32.210 "uuid": 
"7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev3", 00:14:32.210 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev4", 00:14:32.210 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 } 00:14:32.210 ] 00:14:32.210 }' 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.210 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.484 11:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.427 11:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.427 "name": "raid_bdev1", 00:14:33.427 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:33.427 "strip_size_kb": 64, 00:14:33.427 "state": "online", 00:14:33.427 "raid_level": "raid5f", 00:14:33.427 "superblock": true, 00:14:33.427 "num_base_bdevs": 4, 00:14:33.427 "num_base_bdevs_discovered": 4, 00:14:33.427 "num_base_bdevs_operational": 4, 00:14:33.427 "process": { 00:14:33.427 "type": "rebuild", 00:14:33.427 "target": "spare", 00:14:33.427 "progress": { 00:14:33.427 "blocks": 109440, 00:14:33.427 "percent": 57 00:14:33.427 } 00:14:33.427 }, 00:14:33.427 "base_bdevs_list": [ 00:14:33.427 { 00:14:33.427 "name": "spare", 00:14:33.427 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:33.427 "is_configured": true, 00:14:33.427 "data_offset": 2048, 00:14:33.427 "data_size": 63488 00:14:33.427 }, 00:14:33.427 { 00:14:33.427 "name": "BaseBdev2", 00:14:33.427 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:33.427 "is_configured": true, 00:14:33.427 "data_offset": 2048, 00:14:33.427 "data_size": 63488 00:14:33.427 }, 00:14:33.427 { 00:14:33.427 "name": "BaseBdev3", 00:14:33.427 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:33.427 "is_configured": true, 00:14:33.427 "data_offset": 2048, 00:14:33.427 "data_size": 63488 00:14:33.427 }, 00:14:33.427 { 00:14:33.427 "name": "BaseBdev4", 00:14:33.427 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:33.427 "is_configured": true, 00:14:33.427 "data_offset": 
2048, 00:14:33.427 "data_size": 63488 00:14:33.427 } 00:14:33.427 ] 00:14:33.427 }' 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.427 11:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.810 
"name": "raid_bdev1", 00:14:34.810 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:34.810 "strip_size_kb": 64, 00:14:34.810 "state": "online", 00:14:34.810 "raid_level": "raid5f", 00:14:34.810 "superblock": true, 00:14:34.810 "num_base_bdevs": 4, 00:14:34.810 "num_base_bdevs_discovered": 4, 00:14:34.810 "num_base_bdevs_operational": 4, 00:14:34.810 "process": { 00:14:34.810 "type": "rebuild", 00:14:34.810 "target": "spare", 00:14:34.810 "progress": { 00:14:34.810 "blocks": 132480, 00:14:34.810 "percent": 69 00:14:34.810 } 00:14:34.810 }, 00:14:34.810 "base_bdevs_list": [ 00:14:34.810 { 00:14:34.810 "name": "spare", 00:14:34.810 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:34.810 "is_configured": true, 00:14:34.810 "data_offset": 2048, 00:14:34.810 "data_size": 63488 00:14:34.810 }, 00:14:34.810 { 00:14:34.810 "name": "BaseBdev2", 00:14:34.810 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:34.810 "is_configured": true, 00:14:34.810 "data_offset": 2048, 00:14:34.810 "data_size": 63488 00:14:34.810 }, 00:14:34.810 { 00:14:34.810 "name": "BaseBdev3", 00:14:34.810 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:34.810 "is_configured": true, 00:14:34.810 "data_offset": 2048, 00:14:34.810 "data_size": 63488 00:14:34.810 }, 00:14:34.810 { 00:14:34.810 "name": "BaseBdev4", 00:14:34.810 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:34.810 "is_configured": true, 00:14:34.810 "data_offset": 2048, 00:14:34.810 "data_size": 63488 00:14:34.810 } 00:14:34.810 ] 00:14:34.810 }' 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.810 11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.810 
11:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.747 "name": "raid_bdev1", 00:14:35.747 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:35.747 "strip_size_kb": 64, 00:14:35.747 "state": "online", 00:14:35.747 "raid_level": "raid5f", 00:14:35.747 "superblock": true, 00:14:35.747 "num_base_bdevs": 4, 00:14:35.747 "num_base_bdevs_discovered": 4, 00:14:35.747 "num_base_bdevs_operational": 4, 00:14:35.747 "process": { 00:14:35.747 "type": "rebuild", 00:14:35.747 "target": "spare", 00:14:35.747 "progress": { 00:14:35.747 "blocks": 153600, 00:14:35.747 "percent": 80 00:14:35.747 } 00:14:35.747 }, 
00:14:35.747 "base_bdevs_list": [ 00:14:35.747 { 00:14:35.747 "name": "spare", 00:14:35.747 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:35.747 "is_configured": true, 00:14:35.747 "data_offset": 2048, 00:14:35.747 "data_size": 63488 00:14:35.747 }, 00:14:35.747 { 00:14:35.747 "name": "BaseBdev2", 00:14:35.747 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:35.747 "is_configured": true, 00:14:35.747 "data_offset": 2048, 00:14:35.747 "data_size": 63488 00:14:35.747 }, 00:14:35.747 { 00:14:35.747 "name": "BaseBdev3", 00:14:35.747 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:35.747 "is_configured": true, 00:14:35.747 "data_offset": 2048, 00:14:35.747 "data_size": 63488 00:14:35.747 }, 00:14:35.747 { 00:14:35.747 "name": "BaseBdev4", 00:14:35.747 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:35.747 "is_configured": true, 00:14:35.747 "data_offset": 2048, 00:14:35.747 "data_size": 63488 00:14:35.747 } 00:14:35.747 ] 00:14:35.747 }' 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.747 11:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.128 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.128 "name": "raid_bdev1", 00:14:37.128 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:37.128 "strip_size_kb": 64, 00:14:37.128 "state": "online", 00:14:37.128 "raid_level": "raid5f", 00:14:37.128 "superblock": true, 00:14:37.128 "num_base_bdevs": 4, 00:14:37.128 "num_base_bdevs_discovered": 4, 00:14:37.128 "num_base_bdevs_operational": 4, 00:14:37.128 "process": { 00:14:37.128 "type": "rebuild", 00:14:37.128 "target": "spare", 00:14:37.128 "progress": { 00:14:37.128 "blocks": 176640, 00:14:37.128 "percent": 92 00:14:37.128 } 00:14:37.128 }, 00:14:37.128 "base_bdevs_list": [ 00:14:37.128 { 00:14:37.128 "name": "spare", 00:14:37.128 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev2", 00:14:37.128 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev3", 
00:14:37.128 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev4", 00:14:37.128 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 } 00:14:37.128 ] 00:14:37.128 }' 00:14:37.129 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.129 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.129 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.129 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.129 11:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.696 [2024-12-06 11:57:35.244394] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.696 [2024-12-06 11:57:35.244460] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.696 [2024-12-06 11:57:35.244595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.955 11:57:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.955 "name": "raid_bdev1", 00:14:37.955 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:37.955 "strip_size_kb": 64, 00:14:37.955 "state": "online", 00:14:37.955 "raid_level": "raid5f", 00:14:37.955 "superblock": true, 00:14:37.955 "num_base_bdevs": 4, 00:14:37.955 "num_base_bdevs_discovered": 4, 00:14:37.955 "num_base_bdevs_operational": 4, 00:14:37.955 "base_bdevs_list": [ 00:14:37.955 { 00:14:37.955 "name": "spare", 00:14:37.955 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:37.955 "is_configured": true, 00:14:37.955 "data_offset": 2048, 00:14:37.955 "data_size": 63488 00:14:37.955 }, 00:14:37.955 { 00:14:37.955 "name": "BaseBdev2", 00:14:37.955 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:37.955 "is_configured": true, 00:14:37.955 "data_offset": 2048, 00:14:37.955 "data_size": 63488 00:14:37.955 }, 00:14:37.955 { 00:14:37.955 "name": "BaseBdev3", 00:14:37.955 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:37.955 "is_configured": true, 00:14:37.955 "data_offset": 2048, 00:14:37.955 "data_size": 63488 00:14:37.955 }, 00:14:37.955 { 00:14:37.955 "name": "BaseBdev4", 00:14:37.955 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:37.955 "is_configured": true, 00:14:37.955 "data_offset": 2048, 
00:14:37.955 "data_size": 63488 00:14:37.955 } 00:14:37.955 ] 00:14:37.955 }' 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.955 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.215 "name": "raid_bdev1", 00:14:38.215 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:38.215 "strip_size_kb": 64, 00:14:38.215 
"state": "online", 00:14:38.215 "raid_level": "raid5f", 00:14:38.215 "superblock": true, 00:14:38.215 "num_base_bdevs": 4, 00:14:38.215 "num_base_bdevs_discovered": 4, 00:14:38.215 "num_base_bdevs_operational": 4, 00:14:38.215 "base_bdevs_list": [ 00:14:38.215 { 00:14:38.215 "name": "spare", 00:14:38.215 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev2", 00:14:38.215 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev3", 00:14:38.215 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev4", 00:14:38.215 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 } 00:14:38.215 ] 00:14:38.215 }' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.215 "name": "raid_bdev1", 00:14:38.215 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:38.215 "strip_size_kb": 64, 00:14:38.215 "state": "online", 00:14:38.215 "raid_level": "raid5f", 00:14:38.215 "superblock": true, 00:14:38.215 "num_base_bdevs": 4, 00:14:38.215 "num_base_bdevs_discovered": 4, 00:14:38.215 "num_base_bdevs_operational": 4, 00:14:38.215 "base_bdevs_list": [ 00:14:38.215 { 00:14:38.215 "name": "spare", 00:14:38.215 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:38.215 "is_configured": true, 00:14:38.215 
"data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev2", 00:14:38.215 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev3", 00:14:38.215 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 }, 00:14:38.215 { 00:14:38.215 "name": "BaseBdev4", 00:14:38.215 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:38.215 "is_configured": true, 00:14:38.215 "data_offset": 2048, 00:14:38.215 "data_size": 63488 00:14:38.215 } 00:14:38.215 ] 00:14:38.215 }' 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.215 11:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 [2024-12-06 11:57:36.331913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.784 [2024-12-06 11:57:36.331947] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.784 [2024-12-06 11:57:36.332018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.784 [2024-12-06 11:57:36.332101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.784 [2024-12-06 11:57:36.332112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:38.784 
11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.784 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:39.044 /dev/nbd0 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.044 1+0 records in 00:14:39.044 1+0 records out 00:14:39.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386128 s, 10.6 MB/s 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.044 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:39.304 /dev/nbd1 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.304 1+0 records in 00:14:39.304 1+0 records out 00:14:39.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266621 s, 15.4 MB/s 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.304 11:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.677 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.678 
11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.678 [2024-12-06 11:57:37.361273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.678 [2024-12-06 11:57:37.361333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.678 [2024-12-06 11:57:37.361353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:39.678 [2024-12-06 11:57:37.361364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.678 [2024-12-06 11:57:37.363501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.678 [2024-12-06 11:57:37.363591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.678 [2024-12-06 11:57:37.363679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.678 [2024-12-06 11:57:37.363724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.678 [2024-12-06 11:57:37.363842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.678 [2024-12-06 11:57:37.363930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.678 [2024-12-06 11:57:37.363994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.678 spare 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.678 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.949 [2024-12-06 11:57:37.463883] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:39.949 [2024-12-06 11:57:37.463912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:39.949 [2024-12-06 11:57:37.464145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:14:39.949 [2024-12-06 11:57:37.464601] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:39.949 [2024-12-06 11:57:37.464616] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:39.949 [2024-12-06 11:57:37.464752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.949 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.949 "name": "raid_bdev1", 00:14:39.949 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:39.949 "strip_size_kb": 64, 00:14:39.949 "state": "online", 00:14:39.949 "raid_level": "raid5f", 00:14:39.949 "superblock": true, 00:14:39.949 "num_base_bdevs": 4, 00:14:39.949 "num_base_bdevs_discovered": 4, 00:14:39.949 "num_base_bdevs_operational": 4, 00:14:39.949 "base_bdevs_list": [ 00:14:39.949 { 00:14:39.949 "name": "spare", 00:14:39.949 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:39.949 "is_configured": true, 00:14:39.949 "data_offset": 2048, 00:14:39.949 "data_size": 63488 00:14:39.949 }, 00:14:39.949 { 00:14:39.949 "name": "BaseBdev2", 00:14:39.949 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:39.949 "is_configured": true, 00:14:39.950 "data_offset": 2048, 00:14:39.950 "data_size": 63488 00:14:39.950 }, 00:14:39.950 { 00:14:39.950 "name": "BaseBdev3", 00:14:39.950 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:39.950 
"is_configured": true, 00:14:39.950 "data_offset": 2048, 00:14:39.950 "data_size": 63488 00:14:39.950 }, 00:14:39.950 { 00:14:39.950 "name": "BaseBdev4", 00:14:39.950 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:39.950 "is_configured": true, 00:14:39.950 "data_offset": 2048, 00:14:39.950 "data_size": 63488 00:14:39.950 } 00:14:39.950 ] 00:14:39.950 }' 00:14:39.950 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.950 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.227 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.227 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.227 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.228 "name": "raid_bdev1", 00:14:40.228 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:40.228 "strip_size_kb": 64, 00:14:40.228 "state": "online", 00:14:40.228 "raid_level": "raid5f", 
00:14:40.228 "superblock": true, 00:14:40.228 "num_base_bdevs": 4, 00:14:40.228 "num_base_bdevs_discovered": 4, 00:14:40.228 "num_base_bdevs_operational": 4, 00:14:40.228 "base_bdevs_list": [ 00:14:40.228 { 00:14:40.228 "name": "spare", 00:14:40.228 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:40.228 "is_configured": true, 00:14:40.228 "data_offset": 2048, 00:14:40.228 "data_size": 63488 00:14:40.228 }, 00:14:40.228 { 00:14:40.228 "name": "BaseBdev2", 00:14:40.228 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:40.228 "is_configured": true, 00:14:40.228 "data_offset": 2048, 00:14:40.228 "data_size": 63488 00:14:40.228 }, 00:14:40.228 { 00:14:40.228 "name": "BaseBdev3", 00:14:40.228 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:40.228 "is_configured": true, 00:14:40.228 "data_offset": 2048, 00:14:40.228 "data_size": 63488 00:14:40.228 }, 00:14:40.228 { 00:14:40.228 "name": "BaseBdev4", 00:14:40.228 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:40.228 "is_configured": true, 00:14:40.228 "data_offset": 2048, 00:14:40.228 "data_size": 63488 00:14:40.228 } 00:14:40.228 ] 00:14:40.228 }' 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.228 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.506 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.506 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:40.506 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.506 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.506 11:57:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.506 [2024-12-06 11:57:38.033325] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.506 "name": "raid_bdev1", 00:14:40.506 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:40.506 "strip_size_kb": 64, 00:14:40.506 "state": "online", 00:14:40.506 "raid_level": "raid5f", 00:14:40.506 "superblock": true, 00:14:40.506 "num_base_bdevs": 4, 00:14:40.506 "num_base_bdevs_discovered": 3, 00:14:40.506 "num_base_bdevs_operational": 3, 00:14:40.506 "base_bdevs_list": [ 00:14:40.506 { 00:14:40.506 "name": null, 00:14:40.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.506 "is_configured": false, 00:14:40.506 "data_offset": 0, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev2", 00:14:40.506 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev3", 00:14:40.506 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 }, 00:14:40.506 { 00:14:40.506 "name": "BaseBdev4", 00:14:40.506 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:40.506 "is_configured": true, 00:14:40.506 "data_offset": 2048, 00:14:40.506 "data_size": 63488 00:14:40.506 } 00:14:40.506 ] 00:14:40.506 }' 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.506 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.766 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.766 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.766 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.766 [2024-12-06 11:57:38.492564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.766 [2024-12-06 11:57:38.492778] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:40.766 [2024-12-06 11:57:38.492834] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:40.766 [2024-12-06 11:57:38.492912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.766 [2024-12-06 11:57:38.496092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:14:40.766 [2024-12-06 11:57:38.498234] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.766 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.766 11:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.150 11:57:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.150 "name": "raid_bdev1", 00:14:42.150 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:42.150 "strip_size_kb": 64, 00:14:42.150 "state": "online", 00:14:42.150 "raid_level": "raid5f", 00:14:42.150 "superblock": true, 00:14:42.150 "num_base_bdevs": 4, 00:14:42.150 "num_base_bdevs_discovered": 4, 00:14:42.150 "num_base_bdevs_operational": 4, 00:14:42.150 "process": { 00:14:42.150 "type": "rebuild", 00:14:42.150 "target": "spare", 00:14:42.150 "progress": { 00:14:42.150 "blocks": 19200, 00:14:42.150 "percent": 10 00:14:42.150 } 00:14:42.150 }, 00:14:42.150 "base_bdevs_list": [ 00:14:42.150 { 00:14:42.150 "name": "spare", 00:14:42.150 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:42.150 "is_configured": true, 00:14:42.150 "data_offset": 2048, 00:14:42.150 "data_size": 63488 00:14:42.150 }, 00:14:42.150 { 00:14:42.150 "name": "BaseBdev2", 00:14:42.150 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:42.150 "is_configured": true, 00:14:42.150 "data_offset": 2048, 00:14:42.150 "data_size": 63488 00:14:42.150 }, 00:14:42.150 { 00:14:42.150 "name": "BaseBdev3", 00:14:42.150 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:42.150 "is_configured": true, 00:14:42.150 "data_offset": 2048, 00:14:42.150 "data_size": 
63488 00:14:42.150 }, 00:14:42.150 { 00:14:42.150 "name": "BaseBdev4", 00:14:42.150 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:42.150 "is_configured": true, 00:14:42.150 "data_offset": 2048, 00:14:42.150 "data_size": 63488 00:14:42.150 } 00:14:42.150 ] 00:14:42.150 }' 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.150 [2024-12-06 11:57:39.632879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.150 [2024-12-06 11:57:39.703251] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.150 [2024-12-06 11:57:39.703303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.150 [2024-12-06 11:57:39.703321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.150 [2024-12-06 11:57:39.703328] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.150 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.151 "name": "raid_bdev1", 00:14:42.151 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:42.151 "strip_size_kb": 64, 00:14:42.151 "state": "online", 00:14:42.151 "raid_level": "raid5f", 00:14:42.151 "superblock": true, 00:14:42.151 "num_base_bdevs": 4, 00:14:42.151 "num_base_bdevs_discovered": 3, 00:14:42.151 "num_base_bdevs_operational": 3, 00:14:42.151 "base_bdevs_list": [ 00:14:42.151 
{ 00:14:42.151 "name": null, 00:14:42.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.151 "is_configured": false, 00:14:42.151 "data_offset": 0, 00:14:42.151 "data_size": 63488 00:14:42.151 }, 00:14:42.151 { 00:14:42.151 "name": "BaseBdev2", 00:14:42.151 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 }, 00:14:42.151 { 00:14:42.151 "name": "BaseBdev3", 00:14:42.151 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 }, 00:14:42.151 { 00:14:42.151 "name": "BaseBdev4", 00:14:42.151 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 } 00:14:42.151 ] 00:14:42.151 }' 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.151 11:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 11:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.721 11:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.721 11:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 [2024-12-06 11:57:40.183364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.721 [2024-12-06 11:57:40.183453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.721 [2024-12-06 11:57:40.183495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:42.721 [2024-12-06 11:57:40.183523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.721 [2024-12-06 11:57:40.183932] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.721 [2024-12-06 11:57:40.183985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.721 [2024-12-06 11:57:40.184082] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.721 [2024-12-06 11:57:40.184119] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:42.721 [2024-12-06 11:57:40.184179] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:42.721 [2024-12-06 11:57:40.184263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.721 [2024-12-06 11:57:40.186845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:14:42.721 [2024-12-06 11:57:40.188969] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.721 spare 00:14:42.721 11:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.721 11:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.662 11:57:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.662 "name": "raid_bdev1", 00:14:43.662 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:43.662 "strip_size_kb": 64, 00:14:43.662 "state": "online", 00:14:43.662 "raid_level": "raid5f", 00:14:43.662 "superblock": true, 00:14:43.662 "num_base_bdevs": 4, 00:14:43.662 "num_base_bdevs_discovered": 4, 00:14:43.662 "num_base_bdevs_operational": 4, 00:14:43.662 "process": { 00:14:43.662 "type": "rebuild", 00:14:43.662 "target": "spare", 00:14:43.662 "progress": { 00:14:43.662 "blocks": 19200, 00:14:43.662 "percent": 10 00:14:43.662 } 00:14:43.662 }, 00:14:43.662 "base_bdevs_list": [ 00:14:43.662 { 00:14:43.662 "name": "spare", 00:14:43.662 "uuid": "ea56203c-9a88-5f1e-be7d-49423af3cbec", 00:14:43.662 "is_configured": true, 00:14:43.662 "data_offset": 2048, 00:14:43.662 "data_size": 63488 00:14:43.662 }, 00:14:43.662 { 00:14:43.662 "name": "BaseBdev2", 00:14:43.662 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:43.662 "is_configured": true, 00:14:43.662 "data_offset": 2048, 00:14:43.662 "data_size": 63488 00:14:43.662 }, 00:14:43.662 { 00:14:43.662 "name": "BaseBdev3", 00:14:43.662 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:43.662 "is_configured": true, 00:14:43.662 "data_offset": 2048, 00:14:43.662 "data_size": 63488 00:14:43.662 }, 00:14:43.662 { 00:14:43.662 "name": "BaseBdev4", 00:14:43.662 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:43.662 "is_configured": true, 00:14:43.662 "data_offset": 2048, 00:14:43.662 "data_size": 63488 
00:14:43.662 } 00:14:43.662 ] 00:14:43.662 }' 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.662 [2024-12-06 11:57:41.335889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.662 [2024-12-06 11:57:41.393937] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.662 [2024-12-06 11:57:41.393994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.662 [2024-12-06 11:57:41.394009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.662 [2024-12-06 11:57:41.394017] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.662 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.921 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.921 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.921 "name": "raid_bdev1", 00:14:43.921 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:43.921 "strip_size_kb": 64, 00:14:43.921 "state": "online", 00:14:43.921 "raid_level": "raid5f", 00:14:43.921 "superblock": true, 00:14:43.921 "num_base_bdevs": 4, 00:14:43.921 "num_base_bdevs_discovered": 3, 00:14:43.921 "num_base_bdevs_operational": 3, 00:14:43.921 "base_bdevs_list": [ 00:14:43.921 { 00:14:43.921 "name": null, 00:14:43.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.921 "is_configured": false, 00:14:43.921 "data_offset": 0, 00:14:43.921 "data_size": 63488 00:14:43.921 }, 00:14:43.921 { 00:14:43.921 
"name": "BaseBdev2", 00:14:43.921 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:43.921 "is_configured": true, 00:14:43.921 "data_offset": 2048, 00:14:43.921 "data_size": 63488 00:14:43.921 }, 00:14:43.921 { 00:14:43.921 "name": "BaseBdev3", 00:14:43.921 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:43.921 "is_configured": true, 00:14:43.921 "data_offset": 2048, 00:14:43.921 "data_size": 63488 00:14:43.921 }, 00:14:43.921 { 00:14:43.921 "name": "BaseBdev4", 00:14:43.921 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:43.921 "is_configured": true, 00:14:43.921 "data_offset": 2048, 00:14:43.921 "data_size": 63488 00:14:43.921 } 00:14:43.921 ] 00:14:43.921 }' 00:14:43.921 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.921 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.181 "name": "raid_bdev1", 00:14:44.181 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:44.181 "strip_size_kb": 64, 00:14:44.181 "state": "online", 00:14:44.181 "raid_level": "raid5f", 00:14:44.181 "superblock": true, 00:14:44.181 "num_base_bdevs": 4, 00:14:44.181 "num_base_bdevs_discovered": 3, 00:14:44.181 "num_base_bdevs_operational": 3, 00:14:44.181 "base_bdevs_list": [ 00:14:44.181 { 00:14:44.181 "name": null, 00:14:44.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.181 "is_configured": false, 00:14:44.181 "data_offset": 0, 00:14:44.181 "data_size": 63488 00:14:44.181 }, 00:14:44.181 { 00:14:44.181 "name": "BaseBdev2", 00:14:44.181 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:44.181 "is_configured": true, 00:14:44.181 "data_offset": 2048, 00:14:44.181 "data_size": 63488 00:14:44.181 }, 00:14:44.181 { 00:14:44.181 "name": "BaseBdev3", 00:14:44.181 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:44.181 "is_configured": true, 00:14:44.181 "data_offset": 2048, 00:14:44.181 "data_size": 63488 00:14:44.181 }, 00:14:44.181 { 00:14:44.181 "name": "BaseBdev4", 00:14:44.181 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:44.181 "is_configured": true, 00:14:44.181 "data_offset": 2048, 00:14:44.181 "data_size": 63488 00:14:44.181 } 00:14:44.181 ] 00:14:44.181 }' 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.181 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.442 [2024-12-06 11:57:41.993647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.442 [2024-12-06 11:57:41.993700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.442 [2024-12-06 11:57:41.993717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:44.442 [2024-12-06 11:57:41.993728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.442 [2024-12-06 11:57:41.994106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.442 [2024-12-06 11:57:41.994125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.442 [2024-12-06 11:57:41.994187] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:44.442 [2024-12-06 11:57:41.994204] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.442 [2024-12-06 11:57:41.994212] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.442 [2024-12-06 11:57:41.994226] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:14:44.442 BaseBdev1 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.442 11:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.383 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.383 11:57:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.383 "name": "raid_bdev1", 00:14:45.383 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:45.383 "strip_size_kb": 64, 00:14:45.383 "state": "online", 00:14:45.383 "raid_level": "raid5f", 00:14:45.383 "superblock": true, 00:14:45.383 "num_base_bdevs": 4, 00:14:45.383 "num_base_bdevs_discovered": 3, 00:14:45.383 "num_base_bdevs_operational": 3, 00:14:45.383 "base_bdevs_list": [ 00:14:45.383 { 00:14:45.383 "name": null, 00:14:45.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.383 "is_configured": false, 00:14:45.383 "data_offset": 0, 00:14:45.383 "data_size": 63488 00:14:45.383 }, 00:14:45.383 { 00:14:45.383 "name": "BaseBdev2", 00:14:45.383 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:45.383 "is_configured": true, 00:14:45.383 "data_offset": 2048, 00:14:45.383 "data_size": 63488 00:14:45.383 }, 00:14:45.383 { 00:14:45.383 "name": "BaseBdev3", 00:14:45.383 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:45.383 "is_configured": true, 00:14:45.383 "data_offset": 2048, 00:14:45.383 "data_size": 63488 00:14:45.383 }, 00:14:45.383 { 00:14:45.383 "name": "BaseBdev4", 00:14:45.383 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:45.383 "is_configured": true, 00:14:45.383 "data_offset": 2048, 00:14:45.383 "data_size": 63488 00:14:45.383 } 00:14:45.383 ] 00:14:45.383 }' 00:14:45.384 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.384 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.954 11:57:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.954 "name": "raid_bdev1", 00:14:45.954 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:45.954 "strip_size_kb": 64, 00:14:45.954 "state": "online", 00:14:45.954 "raid_level": "raid5f", 00:14:45.954 "superblock": true, 00:14:45.954 "num_base_bdevs": 4, 00:14:45.954 "num_base_bdevs_discovered": 3, 00:14:45.954 "num_base_bdevs_operational": 3, 00:14:45.954 "base_bdevs_list": [ 00:14:45.954 { 00:14:45.954 "name": null, 00:14:45.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.954 "is_configured": false, 00:14:45.954 "data_offset": 0, 00:14:45.954 "data_size": 63488 00:14:45.954 }, 00:14:45.954 { 00:14:45.954 "name": "BaseBdev2", 00:14:45.954 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:45.954 "is_configured": true, 00:14:45.954 "data_offset": 2048, 00:14:45.954 "data_size": 63488 00:14:45.954 }, 00:14:45.954 { 00:14:45.954 "name": "BaseBdev3", 00:14:45.954 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:45.954 "is_configured": true, 00:14:45.954 "data_offset": 2048, 00:14:45.954 "data_size": 63488 00:14:45.954 }, 00:14:45.954 { 00:14:45.954 "name": "BaseBdev4", 00:14:45.954 "uuid": 
"4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:45.954 "is_configured": true, 00:14:45.954 "data_offset": 2048, 00:14:45.954 "data_size": 63488 00:14:45.954 } 00:14:45.954 ] 00:14:45.954 }' 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.954 [2024-12-06 11:57:43.583024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.954 
[2024-12-06 11:57:43.583228] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:45.954 [2024-12-06 11:57:43.583298] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:45.954 request: 00:14:45.954 { 00:14:45.954 "base_bdev": "BaseBdev1", 00:14:45.954 "raid_bdev": "raid_bdev1", 00:14:45.954 "method": "bdev_raid_add_base_bdev", 00:14:45.954 "req_id": 1 00:14:45.954 } 00:14:45.954 Got JSON-RPC error response 00:14:45.954 response: 00:14:45.954 { 00:14:45.954 "code": -22, 00:14:45.954 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:45.954 } 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:45.954 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.955 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.955 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.955 11:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.896 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.896 "name": "raid_bdev1", 00:14:46.896 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:46.896 "strip_size_kb": 64, 00:14:46.896 "state": "online", 00:14:46.896 "raid_level": "raid5f", 00:14:46.896 "superblock": true, 00:14:46.896 "num_base_bdevs": 4, 00:14:46.896 "num_base_bdevs_discovered": 3, 00:14:46.897 "num_base_bdevs_operational": 3, 00:14:46.897 "base_bdevs_list": [ 00:14:46.897 { 00:14:46.897 "name": null, 00:14:46.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.897 "is_configured": false, 00:14:46.897 "data_offset": 0, 00:14:46.897 "data_size": 63488 00:14:46.897 }, 00:14:46.897 { 00:14:46.897 "name": "BaseBdev2", 00:14:46.897 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:46.897 "is_configured": true, 00:14:46.897 "data_offset": 2048, 00:14:46.897 "data_size": 63488 00:14:46.897 }, 00:14:46.897 { 00:14:46.897 "name": 
"BaseBdev3", 00:14:46.897 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:46.897 "is_configured": true, 00:14:46.897 "data_offset": 2048, 00:14:46.897 "data_size": 63488 00:14:46.897 }, 00:14:46.897 { 00:14:46.897 "name": "BaseBdev4", 00:14:46.897 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:46.897 "is_configured": true, 00:14:46.897 "data_offset": 2048, 00:14:46.897 "data_size": 63488 00:14:46.897 } 00:14:46.897 ] 00:14:46.897 }' 00:14:47.157 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.157 11:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.418 "name": "raid_bdev1", 00:14:47.418 "uuid": "f04ce6a0-2a07-4b86-85d1-6cdbd56cf492", 00:14:47.418 
"strip_size_kb": 64, 00:14:47.418 "state": "online", 00:14:47.418 "raid_level": "raid5f", 00:14:47.418 "superblock": true, 00:14:47.418 "num_base_bdevs": 4, 00:14:47.418 "num_base_bdevs_discovered": 3, 00:14:47.418 "num_base_bdevs_operational": 3, 00:14:47.418 "base_bdevs_list": [ 00:14:47.418 { 00:14:47.418 "name": null, 00:14:47.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.418 "is_configured": false, 00:14:47.418 "data_offset": 0, 00:14:47.418 "data_size": 63488 00:14:47.418 }, 00:14:47.418 { 00:14:47.418 "name": "BaseBdev2", 00:14:47.418 "uuid": "7f6fcfe9-910b-5a1f-b0c9-e2b1a5387d64", 00:14:47.418 "is_configured": true, 00:14:47.418 "data_offset": 2048, 00:14:47.418 "data_size": 63488 00:14:47.418 }, 00:14:47.418 { 00:14:47.418 "name": "BaseBdev3", 00:14:47.418 "uuid": "8cca78a1-5a77-5be1-9ebf-8923e22085b5", 00:14:47.418 "is_configured": true, 00:14:47.418 "data_offset": 2048, 00:14:47.418 "data_size": 63488 00:14:47.418 }, 00:14:47.418 { 00:14:47.418 "name": "BaseBdev4", 00:14:47.418 "uuid": "4e19d018-a63e-5077-b1a0-315214b1f3d6", 00:14:47.418 "is_configured": true, 00:14:47.418 "data_offset": 2048, 00:14:47.418 "data_size": 63488 00:14:47.418 } 00:14:47.418 ] 00:14:47.418 }' 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.418 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95111 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95111 ']' 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95111 00:14:47.678 
11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95111 00:14:47.678 killing process with pid 95111 00:14:47.678 Received shutdown signal, test time was about 60.000000 seconds 00:14:47.678 00:14:47.678 Latency(us) 00:14:47.678 [2024-12-06T11:57:45.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.678 [2024-12-06T11:57:45.436Z] =================================================================================================================== 00:14:47.678 [2024-12-06T11:57:45.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95111' 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95111 00:14:47.678 [2024-12-06 11:57:45.233844] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.678 [2024-12-06 11:57:45.233955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.678 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95111 00:14:47.678 [2024-12-06 11:57:45.234029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.678 [2024-12-06 11:57:45.234038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:47.678 [2024-12-06 11:57:45.284878] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.938 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:47.938 00:14:47.938 real 0m25.118s 00:14:47.938 user 0m31.969s 00:14:47.938 sys 0m2.984s 00:14:47.938 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.938 ************************************ 00:14:47.938 END TEST raid5f_rebuild_test_sb 00:14:47.938 ************************************ 00:14:47.938 11:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 11:57:45 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:47.938 11:57:45 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:47.938 11:57:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:47.938 11:57:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.938 11:57:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 ************************************ 00:14:47.938 START TEST raid_state_function_test_sb_4k 00:14:47.938 ************************************ 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=95903 
00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:47.938 Process raid pid: 95903 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95903' 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 95903 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 95903 ']' 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.938 11:57:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 [2024-12-06 11:57:45.683614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:47.938 [2024-12-06 11:57:45.683852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.198 [2024-12-06 11:57:45.809316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.198 [2024-12-06 11:57:45.851528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.198 [2024-12-06 11:57:45.893492] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.198 [2024-12-06 11:57:45.893529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 [2024-12-06 11:57:46.490637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.768 [2024-12-06 11:57:46.490686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.768 [2024-12-06 11:57:46.490698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.768 [2024-12-06 11:57:46.490708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.028 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.028 "name": "Existed_Raid", 00:14:49.028 "uuid": 
"ced7e413-1e15-4b2e-ade2-e68e3078b161", 00:14:49.028 "strip_size_kb": 0, 00:14:49.028 "state": "configuring", 00:14:49.028 "raid_level": "raid1", 00:14:49.028 "superblock": true, 00:14:49.028 "num_base_bdevs": 2, 00:14:49.028 "num_base_bdevs_discovered": 0, 00:14:49.028 "num_base_bdevs_operational": 2, 00:14:49.028 "base_bdevs_list": [ 00:14:49.028 { 00:14:49.028 "name": "BaseBdev1", 00:14:49.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.028 "is_configured": false, 00:14:49.028 "data_offset": 0, 00:14:49.028 "data_size": 0 00:14:49.028 }, 00:14:49.028 { 00:14:49.028 "name": "BaseBdev2", 00:14:49.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.028 "is_configured": false, 00:14:49.028 "data_offset": 0, 00:14:49.028 "data_size": 0 00:14:49.028 } 00:14:49.028 ] 00:14:49.028 }' 00:14:49.028 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.028 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 [2024-12-06 11:57:46.929802] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.287 [2024-12-06 11:57:46.929906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:49.287 11:57:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 [2024-12-06 11:57:46.937805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.287 [2024-12-06 11:57:46.937894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.287 [2024-12-06 11:57:46.937933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.287 [2024-12-06 11:57:46.937956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 [2024-12-06 11:57:46.954564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.287 BaseBdev1 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 [ 00:14:49.287 { 00:14:49.287 "name": "BaseBdev1", 00:14:49.287 "aliases": [ 00:14:49.287 "d5258b5d-bde1-4db2-b6db-76cccfacb4a8" 00:14:49.287 ], 00:14:49.287 "product_name": "Malloc disk", 00:14:49.287 "block_size": 4096, 00:14:49.287 "num_blocks": 8192, 00:14:49.287 "uuid": "d5258b5d-bde1-4db2-b6db-76cccfacb4a8", 00:14:49.287 "assigned_rate_limits": { 00:14:49.287 "rw_ios_per_sec": 0, 00:14:49.287 "rw_mbytes_per_sec": 0, 00:14:49.287 "r_mbytes_per_sec": 0, 00:14:49.287 "w_mbytes_per_sec": 0 00:14:49.287 }, 00:14:49.287 "claimed": true, 00:14:49.287 "claim_type": "exclusive_write", 00:14:49.287 "zoned": false, 00:14:49.287 "supported_io_types": { 00:14:49.287 "read": true, 00:14:49.287 "write": true, 00:14:49.287 "unmap": true, 00:14:49.287 "flush": true, 00:14:49.287 "reset": true, 00:14:49.287 "nvme_admin": false, 00:14:49.287 "nvme_io": false, 00:14:49.287 "nvme_io_md": false, 00:14:49.287 "write_zeroes": true, 00:14:49.287 "zcopy": true, 00:14:49.287 
"get_zone_info": false, 00:14:49.287 "zone_management": false, 00:14:49.288 "zone_append": false, 00:14:49.288 "compare": false, 00:14:49.288 "compare_and_write": false, 00:14:49.288 "abort": true, 00:14:49.288 "seek_hole": false, 00:14:49.288 "seek_data": false, 00:14:49.288 "copy": true, 00:14:49.288 "nvme_iov_md": false 00:14:49.288 }, 00:14:49.288 "memory_domains": [ 00:14:49.288 { 00:14:49.288 "dma_device_id": "system", 00:14:49.288 "dma_device_type": 1 00:14:49.288 }, 00:14:49.288 { 00:14:49.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.288 "dma_device_type": 2 00:14:49.288 } 00:14:49.288 ], 00:14:49.288 "driver_specific": {} 00:14:49.288 } 00:14:49.288 ] 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.288 11:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.288 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.288 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.288 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.547 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.547 "name": "Existed_Raid", 00:14:49.547 "uuid": "10674a58-7393-4759-9ba6-94c59f5a5f75", 00:14:49.547 "strip_size_kb": 0, 00:14:49.547 "state": "configuring", 00:14:49.547 "raid_level": "raid1", 00:14:49.547 "superblock": true, 00:14:49.547 "num_base_bdevs": 2, 00:14:49.547 "num_base_bdevs_discovered": 1, 00:14:49.547 "num_base_bdevs_operational": 2, 00:14:49.547 "base_bdevs_list": [ 00:14:49.547 { 00:14:49.547 "name": "BaseBdev1", 00:14:49.547 "uuid": "d5258b5d-bde1-4db2-b6db-76cccfacb4a8", 00:14:49.547 "is_configured": true, 00:14:49.547 "data_offset": 256, 00:14:49.547 "data_size": 7936 00:14:49.547 }, 00:14:49.548 { 00:14:49.548 "name": "BaseBdev2", 00:14:49.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.548 "is_configured": false, 00:14:49.548 "data_offset": 0, 00:14:49.548 "data_size": 0 00:14:49.548 } 00:14:49.548 ] 00:14:49.548 }' 00:14:49.548 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.548 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.807 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 [2024-12-06 11:57:47.469679] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.808 [2024-12-06 11:57:47.469721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 [2024-12-06 11:57:47.481706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.808 [2024-12-06 11:57:47.483494] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.808 [2024-12-06 11:57:47.483571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:49.808 11:57:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.808 "name": "Existed_Raid", 00:14:49.808 "uuid": "1471f484-d13e-419f-ab4f-310884f9b901", 00:14:49.808 "strip_size_kb": 0, 00:14:49.808 "state": "configuring", 00:14:49.808 "raid_level": "raid1", 00:14:49.808 "superblock": true, 
00:14:49.808 "num_base_bdevs": 2, 00:14:49.808 "num_base_bdevs_discovered": 1, 00:14:49.808 "num_base_bdevs_operational": 2, 00:14:49.808 "base_bdevs_list": [ 00:14:49.808 { 00:14:49.808 "name": "BaseBdev1", 00:14:49.808 "uuid": "d5258b5d-bde1-4db2-b6db-76cccfacb4a8", 00:14:49.808 "is_configured": true, 00:14:49.808 "data_offset": 256, 00:14:49.808 "data_size": 7936 00:14:49.808 }, 00:14:49.808 { 00:14:49.808 "name": "BaseBdev2", 00:14:49.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.808 "is_configured": false, 00:14:49.808 "data_offset": 0, 00:14:49.808 "data_size": 0 00:14:49.808 } 00:14:49.808 ] 00:14:49.808 }' 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.808 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.377 [2024-12-06 11:57:47.897218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.377 [2024-12-06 11:57:47.897885] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:50.377 [2024-12-06 11:57:47.898028] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:50.377 BaseBdev2 00:14:50.377 [2024-12-06 11:57:47.898883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.377 [2024-12-06 11:57:47.899400] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:50.377 
[2024-12-06 11:57:47.899478] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:50.377 [2024-12-06 11:57:47.899804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.377 [ 00:14:50.377 { 00:14:50.377 "name": "BaseBdev2", 00:14:50.377 "aliases": [ 00:14:50.377 "cd347d25-5dcf-43ad-86ec-7179ce0af2a2" 00:14:50.377 ], 00:14:50.377 "product_name": "Malloc 
disk", 00:14:50.377 "block_size": 4096, 00:14:50.377 "num_blocks": 8192, 00:14:50.377 "uuid": "cd347d25-5dcf-43ad-86ec-7179ce0af2a2", 00:14:50.377 "assigned_rate_limits": { 00:14:50.377 "rw_ios_per_sec": 0, 00:14:50.377 "rw_mbytes_per_sec": 0, 00:14:50.377 "r_mbytes_per_sec": 0, 00:14:50.377 "w_mbytes_per_sec": 0 00:14:50.377 }, 00:14:50.377 "claimed": true, 00:14:50.377 "claim_type": "exclusive_write", 00:14:50.377 "zoned": false, 00:14:50.377 "supported_io_types": { 00:14:50.377 "read": true, 00:14:50.377 "write": true, 00:14:50.377 "unmap": true, 00:14:50.377 "flush": true, 00:14:50.377 "reset": true, 00:14:50.377 "nvme_admin": false, 00:14:50.377 "nvme_io": false, 00:14:50.377 "nvme_io_md": false, 00:14:50.377 "write_zeroes": true, 00:14:50.377 "zcopy": true, 00:14:50.377 "get_zone_info": false, 00:14:50.377 "zone_management": false, 00:14:50.377 "zone_append": false, 00:14:50.377 "compare": false, 00:14:50.377 "compare_and_write": false, 00:14:50.377 "abort": true, 00:14:50.377 "seek_hole": false, 00:14:50.377 "seek_data": false, 00:14:50.377 "copy": true, 00:14:50.377 "nvme_iov_md": false 00:14:50.377 }, 00:14:50.377 "memory_domains": [ 00:14:50.377 { 00:14:50.377 "dma_device_id": "system", 00:14:50.377 "dma_device_type": 1 00:14:50.377 }, 00:14:50.377 { 00:14:50.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.377 "dma_device_type": 2 00:14:50.377 } 00:14:50.377 ], 00:14:50.377 "driver_specific": {} 00:14:50.377 } 00:14:50.377 ] 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.377 "name": "Existed_Raid", 00:14:50.377 "uuid": "1471f484-d13e-419f-ab4f-310884f9b901", 00:14:50.377 "strip_size_kb": 0, 00:14:50.377 "state": "online", 
00:14:50.377 "raid_level": "raid1", 00:14:50.377 "superblock": true, 00:14:50.377 "num_base_bdevs": 2, 00:14:50.377 "num_base_bdevs_discovered": 2, 00:14:50.377 "num_base_bdevs_operational": 2, 00:14:50.377 "base_bdevs_list": [ 00:14:50.377 { 00:14:50.377 "name": "BaseBdev1", 00:14:50.377 "uuid": "d5258b5d-bde1-4db2-b6db-76cccfacb4a8", 00:14:50.377 "is_configured": true, 00:14:50.377 "data_offset": 256, 00:14:50.377 "data_size": 7936 00:14:50.377 }, 00:14:50.377 { 00:14:50.377 "name": "BaseBdev2", 00:14:50.377 "uuid": "cd347d25-5dcf-43ad-86ec-7179ce0af2a2", 00:14:50.377 "is_configured": true, 00:14:50.377 "data_offset": 256, 00:14:50.377 "data_size": 7936 00:14:50.377 } 00:14:50.377 ] 00:14:50.377 }' 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.377 11:57:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 [2024-12-06 11:57:48.412571] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.948 "name": "Existed_Raid", 00:14:50.948 "aliases": [ 00:14:50.948 "1471f484-d13e-419f-ab4f-310884f9b901" 00:14:50.948 ], 00:14:50.948 "product_name": "Raid Volume", 00:14:50.948 "block_size": 4096, 00:14:50.948 "num_blocks": 7936, 00:14:50.948 "uuid": "1471f484-d13e-419f-ab4f-310884f9b901", 00:14:50.948 "assigned_rate_limits": { 00:14:50.948 "rw_ios_per_sec": 0, 00:14:50.948 "rw_mbytes_per_sec": 0, 00:14:50.948 "r_mbytes_per_sec": 0, 00:14:50.948 "w_mbytes_per_sec": 0 00:14:50.948 }, 00:14:50.948 "claimed": false, 00:14:50.948 "zoned": false, 00:14:50.948 "supported_io_types": { 00:14:50.948 "read": true, 00:14:50.948 "write": true, 00:14:50.948 "unmap": false, 00:14:50.948 "flush": false, 00:14:50.948 "reset": true, 00:14:50.948 "nvme_admin": false, 00:14:50.948 "nvme_io": false, 00:14:50.948 "nvme_io_md": false, 00:14:50.948 "write_zeroes": true, 00:14:50.948 "zcopy": false, 00:14:50.948 "get_zone_info": false, 00:14:50.948 "zone_management": false, 00:14:50.948 "zone_append": false, 00:14:50.948 "compare": false, 00:14:50.948 "compare_and_write": false, 00:14:50.948 "abort": false, 00:14:50.948 "seek_hole": false, 00:14:50.948 "seek_data": false, 00:14:50.948 "copy": false, 00:14:50.948 "nvme_iov_md": false 00:14:50.948 }, 00:14:50.948 "memory_domains": [ 00:14:50.948 { 00:14:50.948 "dma_device_id": "system", 00:14:50.948 "dma_device_type": 1 00:14:50.948 }, 00:14:50.948 { 00:14:50.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.948 "dma_device_type": 2 00:14:50.948 }, 00:14:50.948 { 00:14:50.948 
"dma_device_id": "system", 00:14:50.948 "dma_device_type": 1 00:14:50.948 }, 00:14:50.948 { 00:14:50.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.948 "dma_device_type": 2 00:14:50.948 } 00:14:50.948 ], 00:14:50.948 "driver_specific": { 00:14:50.948 "raid": { 00:14:50.948 "uuid": "1471f484-d13e-419f-ab4f-310884f9b901", 00:14:50.948 "strip_size_kb": 0, 00:14:50.948 "state": "online", 00:14:50.948 "raid_level": "raid1", 00:14:50.948 "superblock": true, 00:14:50.948 "num_base_bdevs": 2, 00:14:50.948 "num_base_bdevs_discovered": 2, 00:14:50.948 "num_base_bdevs_operational": 2, 00:14:50.948 "base_bdevs_list": [ 00:14:50.948 { 00:14:50.948 "name": "BaseBdev1", 00:14:50.948 "uuid": "d5258b5d-bde1-4db2-b6db-76cccfacb4a8", 00:14:50.948 "is_configured": true, 00:14:50.948 "data_offset": 256, 00:14:50.948 "data_size": 7936 00:14:50.948 }, 00:14:50.948 { 00:14:50.948 "name": "BaseBdev2", 00:14:50.948 "uuid": "cd347d25-5dcf-43ad-86ec-7179ce0af2a2", 00:14:50.948 "is_configured": true, 00:14:50.948 "data_offset": 256, 00:14:50.948 "data_size": 7936 00:14:50.948 } 00:14:50.948 ] 00:14:50.948 } 00:14:50.948 } 00:14:50.948 }' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.948 BaseBdev2' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.948 
11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 [2024-12-06 11:57:48.643967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.948 11:57:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.207 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.207 "name": "Existed_Raid", 00:14:51.207 "uuid": "1471f484-d13e-419f-ab4f-310884f9b901", 00:14:51.207 "strip_size_kb": 0, 00:14:51.207 "state": "online", 00:14:51.207 "raid_level": "raid1", 00:14:51.207 "superblock": true, 00:14:51.207 "num_base_bdevs": 2, 00:14:51.207 "num_base_bdevs_discovered": 1, 00:14:51.207 "num_base_bdevs_operational": 1, 00:14:51.207 "base_bdevs_list": [ 00:14:51.207 { 00:14:51.207 "name": null, 00:14:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.208 "is_configured": false, 00:14:51.208 "data_offset": 0, 00:14:51.208 "data_size": 7936 00:14:51.208 }, 00:14:51.208 { 00:14:51.208 "name": "BaseBdev2", 00:14:51.208 "uuid": "cd347d25-5dcf-43ad-86ec-7179ce0af2a2", 00:14:51.208 "is_configured": true, 00:14:51.208 "data_offset": 256, 00:14:51.208 "data_size": 7936 00:14:51.208 } 00:14:51.208 ] 00:14:51.208 }' 00:14:51.208 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.208 11:57:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:51.467 11:57:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.467 [2024-12-06 11:57:49.118392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.467 [2024-12-06 11:57:49.118485] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.467 [2024-12-06 11:57:49.129890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.467 [2024-12-06 11:57:49.129942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.467 [2024-12-06 11:57:49.129953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:51.467 11:57:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 95903 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 95903 ']' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 95903 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.467 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95903 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.727 killing process with pid 95903 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95903' 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 95903 00:14:51.727 [2024-12-06 11:57:49.228768] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 95903 00:14:51.727 [2024-12-06 11:57:49.229732] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.727 00:14:51.727 real 0m3.884s 00:14:51.727 user 0m6.109s 00:14:51.727 sys 0m0.812s 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.727 11:57:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.727 ************************************ 00:14:51.727 END TEST raid_state_function_test_sb_4k 00:14:51.727 ************************************ 00:14:51.986 11:57:49 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:51.987 11:57:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:51.987 11:57:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.987 11:57:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.987 ************************************ 00:14:51.987 START TEST raid_superblock_test_4k 00:14:51.987 ************************************ 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96143 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96143 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L 
bdev_raid 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96143 ']' 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.987 11:57:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.987 [2024-12-06 11:57:49.638872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:51.987 [2024-12-06 11:57:49.639006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96143 ] 00:14:52.245 [2024-12-06 11:57:49.783108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.245 [2024-12-06 11:57:49.826850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.246 [2024-12-06 11:57:49.869394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.246 [2024-12-06 11:57:49.869440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:14:52.815 11:57:50 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.815 malloc1 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.815 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.815 [2024-12-06 11:57:50.459710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.815 [2024-12-06 11:57:50.459792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.815 
[2024-12-06 11:57:50.459814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:52.816 [2024-12-06 11:57:50.459827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.816 [2024-12-06 11:57:50.461862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.816 [2024-12-06 11:57:50.461903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.816 pt1 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.816 malloc2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.816 [2024-12-06 11:57:50.505269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.816 [2024-12-06 11:57:50.505368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.816 [2024-12-06 11:57:50.505406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.816 [2024-12-06 11:57:50.505431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.816 [2024-12-06 11:57:50.510180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.816 [2024-12-06 11:57:50.510280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.816 pt2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.816 [2024-12-06 11:57:50.518558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.816 [2024-12-06 11:57:50.521474] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.816 [2024-12-06 11:57:50.521690] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:52.816 [2024-12-06 11:57:50.521714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:52.816 [2024-12-06 11:57:50.522119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:52.816 [2024-12-06 11:57:50.522347] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:52.816 [2024-12-06 11:57:50.522384] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:52.816 [2024-12-06 11:57:50.522627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.816 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.076 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.076 "name": "raid_bdev1", 00:14:53.076 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:53.076 "strip_size_kb": 0, 00:14:53.076 "state": "online", 00:14:53.076 "raid_level": "raid1", 00:14:53.076 "superblock": true, 00:14:53.076 "num_base_bdevs": 2, 00:14:53.076 "num_base_bdevs_discovered": 2, 00:14:53.076 "num_base_bdevs_operational": 2, 00:14:53.076 "base_bdevs_list": [ 00:14:53.076 { 00:14:53.076 "name": "pt1", 00:14:53.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.076 "is_configured": true, 00:14:53.076 "data_offset": 256, 00:14:53.076 "data_size": 7936 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "name": "pt2", 00:14:53.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.076 "is_configured": true, 00:14:53.076 "data_offset": 256, 00:14:53.076 "data_size": 7936 00:14:53.076 } 00:14:53.076 ] 00:14:53.076 }' 00:14:53.076 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.076 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.335 11:57:50 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.335 [2024-12-06 11:57:50.970039] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.335 11:57:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.335 "name": "raid_bdev1", 00:14:53.335 "aliases": [ 00:14:53.335 "151de912-9bbc-433d-b433-fd25ae38f60b" 00:14:53.335 ], 00:14:53.335 "product_name": "Raid Volume", 00:14:53.335 "block_size": 4096, 00:14:53.335 "num_blocks": 7936, 00:14:53.335 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:53.335 "assigned_rate_limits": { 00:14:53.335 "rw_ios_per_sec": 0, 00:14:53.335 "rw_mbytes_per_sec": 0, 00:14:53.335 "r_mbytes_per_sec": 0, 00:14:53.335 "w_mbytes_per_sec": 0 00:14:53.335 }, 00:14:53.335 "claimed": false, 00:14:53.335 "zoned": false, 00:14:53.335 "supported_io_types": { 00:14:53.335 "read": true, 00:14:53.335 "write": true, 00:14:53.335 "unmap": false, 00:14:53.335 "flush": false, 
00:14:53.335 "reset": true, 00:14:53.335 "nvme_admin": false, 00:14:53.335 "nvme_io": false, 00:14:53.335 "nvme_io_md": false, 00:14:53.335 "write_zeroes": true, 00:14:53.335 "zcopy": false, 00:14:53.335 "get_zone_info": false, 00:14:53.335 "zone_management": false, 00:14:53.335 "zone_append": false, 00:14:53.335 "compare": false, 00:14:53.335 "compare_and_write": false, 00:14:53.335 "abort": false, 00:14:53.335 "seek_hole": false, 00:14:53.335 "seek_data": false, 00:14:53.335 "copy": false, 00:14:53.335 "nvme_iov_md": false 00:14:53.335 }, 00:14:53.335 "memory_domains": [ 00:14:53.335 { 00:14:53.335 "dma_device_id": "system", 00:14:53.335 "dma_device_type": 1 00:14:53.335 }, 00:14:53.335 { 00:14:53.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.335 "dma_device_type": 2 00:14:53.335 }, 00:14:53.335 { 00:14:53.335 "dma_device_id": "system", 00:14:53.335 "dma_device_type": 1 00:14:53.335 }, 00:14:53.335 { 00:14:53.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.335 "dma_device_type": 2 00:14:53.335 } 00:14:53.335 ], 00:14:53.335 "driver_specific": { 00:14:53.335 "raid": { 00:14:53.335 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:53.335 "strip_size_kb": 0, 00:14:53.335 "state": "online", 00:14:53.335 "raid_level": "raid1", 00:14:53.335 "superblock": true, 00:14:53.335 "num_base_bdevs": 2, 00:14:53.335 "num_base_bdevs_discovered": 2, 00:14:53.335 "num_base_bdevs_operational": 2, 00:14:53.335 "base_bdevs_list": [ 00:14:53.335 { 00:14:53.335 "name": "pt1", 00:14:53.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.335 "is_configured": true, 00:14:53.335 "data_offset": 256, 00:14:53.335 "data_size": 7936 00:14:53.335 }, 00:14:53.335 { 00:14:53.335 "name": "pt2", 00:14:53.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.335 "is_configured": true, 00:14:53.335 "data_offset": 256, 00:14:53.335 "data_size": 7936 00:14:53.335 } 00:14:53.335 ] 00:14:53.335 } 00:14:53.335 } 00:14:53.335 }' 00:14:53.335 11:57:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.335 pt2' 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.335 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.595 11:57:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.595 [2024-12-06 11:57:51.169600] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=151de912-9bbc-433d-b433-fd25ae38f60b 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 151de912-9bbc-433d-b433-fd25ae38f60b ']' 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.595 [2024-12-06 11:57:51.213348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.595 [2024-12-06 11:57:51.213371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.595 [2024-12-06 11:57:51.213435] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.595 [2024-12-06 11:57:51.213492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.595 [2024-12-06 11:57:51.213506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:53.595 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.596 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.855 [2024-12-06 11:57:51.353101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:53.856 [2024-12-06 11:57:51.354887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:53.856 [2024-12-06 11:57:51.354960] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:53.856 [2024-12-06 11:57:51.355001] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:53.856 [2024-12-06 11:57:51.355015] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.856 [2024-12-06 11:57:51.355031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:53.856 request: 00:14:53.856 { 00:14:53.856 "name": "raid_bdev1", 00:14:53.856 "raid_level": "raid1", 00:14:53.856 "base_bdevs": [ 00:14:53.856 "malloc1", 00:14:53.856 "malloc2" 00:14:53.856 ], 00:14:53.856 "superblock": false, 00:14:53.856 "method": "bdev_raid_create", 00:14:53.856 "req_id": 1 00:14:53.856 } 00:14:53.856 Got JSON-RPC error response 00:14:53.856 response: 00:14:53.856 { 00:14:53.856 "code": -17, 00:14:53.856 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:53.856 } 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.856 [2024-12-06 11:57:51.416968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.856 [2024-12-06 11:57:51.417017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.856 [2024-12-06 11:57:51.417038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:53.856 [2024-12-06 11:57:51.417046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.856 [2024-12-06 11:57:51.419056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.856 [2024-12-06 11:57:51.419090] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.856 [2024-12-06 11:57:51.419161] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:53.856 [2024-12-06 11:57:51.419195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.856 pt1 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.856 "name": "raid_bdev1", 00:14:53.856 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:53.856 "strip_size_kb": 0, 00:14:53.856 "state": "configuring", 00:14:53.856 "raid_level": "raid1", 00:14:53.856 "superblock": true, 00:14:53.856 "num_base_bdevs": 2, 00:14:53.856 "num_base_bdevs_discovered": 1, 00:14:53.856 "num_base_bdevs_operational": 2, 00:14:53.856 "base_bdevs_list": [ 00:14:53.856 { 00:14:53.856 "name": "pt1", 00:14:53.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.856 "is_configured": true, 00:14:53.856 "data_offset": 256, 00:14:53.856 "data_size": 7936 00:14:53.856 }, 00:14:53.856 { 00:14:53.856 "name": null, 00:14:53.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.856 "is_configured": false, 00:14:53.856 "data_offset": 256, 00:14:53.856 "data_size": 7936 00:14:53.856 } 00:14:53.856 ] 00:14:53.856 }' 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.856 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.115 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:54.115 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:54.116 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.116 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.116 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.116 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:14:54.376 [2024-12-06 11:57:51.872187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.376 [2024-12-06 11:57:51.872244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.376 [2024-12-06 11:57:51.872261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:54.376 [2024-12-06 11:57:51.872269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.376 [2024-12-06 11:57:51.872596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.376 [2024-12-06 11:57:51.872619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.376 [2024-12-06 11:57:51.872673] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.376 [2024-12-06 11:57:51.872690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.376 [2024-12-06 11:57:51.872767] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:54.376 [2024-12-06 11:57:51.872779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:54.376 [2024-12-06 11:57:51.873003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:54.376 [2024-12-06 11:57:51.873110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:54.376 [2024-12-06 11:57:51.873127] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:54.376 [2024-12-06 11:57:51.873215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.376 pt2 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:54.376 11:57:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.376 "name": "raid_bdev1", 00:14:54.376 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:54.376 
"strip_size_kb": 0, 00:14:54.376 "state": "online", 00:14:54.376 "raid_level": "raid1", 00:14:54.376 "superblock": true, 00:14:54.376 "num_base_bdevs": 2, 00:14:54.376 "num_base_bdevs_discovered": 2, 00:14:54.376 "num_base_bdevs_operational": 2, 00:14:54.376 "base_bdevs_list": [ 00:14:54.376 { 00:14:54.376 "name": "pt1", 00:14:54.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.376 "is_configured": true, 00:14:54.376 "data_offset": 256, 00:14:54.376 "data_size": 7936 00:14:54.376 }, 00:14:54.376 { 00:14:54.376 "name": "pt2", 00:14:54.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.376 "is_configured": true, 00:14:54.376 "data_offset": 256, 00:14:54.376 "data_size": 7936 00:14:54.376 } 00:14:54.376 ] 00:14:54.376 }' 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.376 11:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.637 11:57:52 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.637 [2024-12-06 11:57:52.315653] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.637 "name": "raid_bdev1", 00:14:54.637 "aliases": [ 00:14:54.637 "151de912-9bbc-433d-b433-fd25ae38f60b" 00:14:54.637 ], 00:14:54.637 "product_name": "Raid Volume", 00:14:54.637 "block_size": 4096, 00:14:54.637 "num_blocks": 7936, 00:14:54.637 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:54.637 "assigned_rate_limits": { 00:14:54.637 "rw_ios_per_sec": 0, 00:14:54.637 "rw_mbytes_per_sec": 0, 00:14:54.637 "r_mbytes_per_sec": 0, 00:14:54.637 "w_mbytes_per_sec": 0 00:14:54.637 }, 00:14:54.637 "claimed": false, 00:14:54.637 "zoned": false, 00:14:54.637 "supported_io_types": { 00:14:54.637 "read": true, 00:14:54.637 "write": true, 00:14:54.637 "unmap": false, 00:14:54.637 "flush": false, 00:14:54.637 "reset": true, 00:14:54.637 "nvme_admin": false, 00:14:54.637 "nvme_io": false, 00:14:54.637 "nvme_io_md": false, 00:14:54.637 "write_zeroes": true, 00:14:54.637 "zcopy": false, 00:14:54.637 "get_zone_info": false, 00:14:54.637 "zone_management": false, 00:14:54.637 "zone_append": false, 00:14:54.637 "compare": false, 00:14:54.637 "compare_and_write": false, 00:14:54.637 "abort": false, 00:14:54.637 "seek_hole": false, 00:14:54.637 "seek_data": false, 00:14:54.637 "copy": false, 00:14:54.637 "nvme_iov_md": false 00:14:54.637 }, 00:14:54.637 "memory_domains": [ 00:14:54.637 { 00:14:54.637 "dma_device_id": "system", 00:14:54.637 "dma_device_type": 1 00:14:54.637 }, 00:14:54.637 { 00:14:54.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.637 "dma_device_type": 2 00:14:54.637 }, 00:14:54.637 { 00:14:54.637 "dma_device_id": "system", 00:14:54.637 
"dma_device_type": 1 00:14:54.637 }, 00:14:54.637 { 00:14:54.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.637 "dma_device_type": 2 00:14:54.637 } 00:14:54.637 ], 00:14:54.637 "driver_specific": { 00:14:54.637 "raid": { 00:14:54.637 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:54.637 "strip_size_kb": 0, 00:14:54.637 "state": "online", 00:14:54.637 "raid_level": "raid1", 00:14:54.637 "superblock": true, 00:14:54.637 "num_base_bdevs": 2, 00:14:54.637 "num_base_bdevs_discovered": 2, 00:14:54.637 "num_base_bdevs_operational": 2, 00:14:54.637 "base_bdevs_list": [ 00:14:54.637 { 00:14:54.637 "name": "pt1", 00:14:54.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.637 "is_configured": true, 00:14:54.637 "data_offset": 256, 00:14:54.637 "data_size": 7936 00:14:54.637 }, 00:14:54.637 { 00:14:54.637 "name": "pt2", 00:14:54.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.637 "is_configured": true, 00:14:54.637 "data_offset": 256, 00:14:54.637 "data_size": 7936 00:14:54.637 } 00:14:54.637 ] 00:14:54.637 } 00:14:54.637 } 00:14:54.637 }' 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:54.637 pt2' 00:14:54.637 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:54.899 [2024-12-06 11:57:52.531295] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 151de912-9bbc-433d-b433-fd25ae38f60b '!=' 151de912-9bbc-433d-b433-fd25ae38f60b ']' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 [2024-12-06 11:57:52.575034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.899 "name": "raid_bdev1", 00:14:54.899 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:54.899 "strip_size_kb": 0, 00:14:54.899 "state": "online", 00:14:54.899 "raid_level": "raid1", 00:14:54.899 "superblock": true, 00:14:54.899 "num_base_bdevs": 2, 00:14:54.899 "num_base_bdevs_discovered": 1, 00:14:54.899 "num_base_bdevs_operational": 1, 00:14:54.899 "base_bdevs_list": [ 00:14:54.899 { 00:14:54.899 "name": null, 00:14:54.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.899 "is_configured": false, 00:14:54.899 "data_offset": 0, 00:14:54.899 "data_size": 7936 00:14:54.899 }, 00:14:54.899 { 00:14:54.899 "name": "pt2", 00:14:54.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.899 "is_configured": true, 00:14:54.899 "data_offset": 256, 00:14:54.899 "data_size": 7936 00:14:54.899 } 00:14:54.899 ] 00:14:54.899 }' 00:14:54.899 11:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.899 11:57:52 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.470 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.470 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.470 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.470 [2024-12-06 11:57:53.006257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.470 [2024-12-06 11:57:53.006282] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.470 [2024-12-06 11:57:53.006340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.471 [2024-12-06 11:57:53.006387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.471 [2024-12-06 11:57:53.006399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.471 [2024-12-06 11:57:53.078112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.471 [2024-12-06 11:57:53.078162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.471 [2024-12-06 11:57:53.078179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:55.471 [2024-12-06 11:57:53.078188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.471 [2024-12-06 11:57:53.080218] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.471 pt2 00:14:55.471 [2024-12-06 11:57:53.080314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.471 [2024-12-06 11:57:53.080386] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:55.471 [2024-12-06 11:57:53.080415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.471 [2024-12-06 11:57:53.080486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:55.471 [2024-12-06 11:57:53.080494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:55.471 [2024-12-06 11:57:53.080698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:55.471 [2024-12-06 11:57:53.080796] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:55.471 [2024-12-06 11:57:53.080806] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:55.471 [2024-12-06 11:57:53.080894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.471 "name": "raid_bdev1", 00:14:55.471 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:55.471 "strip_size_kb": 0, 00:14:55.471 "state": "online", 00:14:55.471 "raid_level": "raid1", 00:14:55.471 "superblock": true, 00:14:55.471 "num_base_bdevs": 2, 00:14:55.471 "num_base_bdevs_discovered": 1, 00:14:55.471 "num_base_bdevs_operational": 1, 00:14:55.471 "base_bdevs_list": [ 00:14:55.471 { 00:14:55.471 "name": null, 00:14:55.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.471 "is_configured": false, 00:14:55.471 "data_offset": 256, 00:14:55.471 "data_size": 7936 00:14:55.471 }, 00:14:55.471 { 00:14:55.471 "name": "pt2", 00:14:55.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.471 "is_configured": true, 00:14:55.471 "data_offset": 256, 00:14:55.471 "data_size": 7936 00:14:55.471 } 00:14:55.471 ] 00:14:55.471 }' 
00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.471 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 [2024-12-06 11:57:53.517349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.042 [2024-12-06 11:57:53.517411] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.042 [2024-12-06 11:57:53.517474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.042 [2024-12-06 11:57:53.517523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.042 [2024-12-06 11:57:53.517555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 [2024-12-06 11:57:53.561309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.042 [2024-12-06 11:57:53.561391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.042 [2024-12-06 11:57:53.561419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:14:56.042 [2024-12-06 11:57:53.561448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.042 [2024-12-06 11:57:53.563425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.042 [2024-12-06 11:57:53.563494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.042 [2024-12-06 11:57:53.563568] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:56.042 [2024-12-06 11:57:53.563633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.042 [2024-12-06 11:57:53.563742] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:56.042 [2024-12-06 11:57:53.563791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.042 [2024-12-06 11:57:53.563885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:56.042 [2024-12-06 11:57:53.563958] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.042 [2024-12-06 11:57:53.564052] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:56.042 [2024-12-06 11:57:53.564090] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:56.042 [2024-12-06 11:57:53.564309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:56.042 [2024-12-06 11:57:53.564460] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:56.042 [2024-12-06 11:57:53.564500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:56.042 [2024-12-06 11:57:53.564633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.042 pt1 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.042 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.043 "name": "raid_bdev1", 00:14:56.043 "uuid": "151de912-9bbc-433d-b433-fd25ae38f60b", 00:14:56.043 "strip_size_kb": 0, 00:14:56.043 "state": "online", 00:14:56.043 "raid_level": "raid1", 00:14:56.043 "superblock": true, 00:14:56.043 "num_base_bdevs": 2, 00:14:56.043 "num_base_bdevs_discovered": 1, 00:14:56.043 "num_base_bdevs_operational": 1, 00:14:56.043 "base_bdevs_list": [ 00:14:56.043 { 00:14:56.043 "name": null, 00:14:56.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.043 "is_configured": false, 00:14:56.043 "data_offset": 256, 00:14:56.043 "data_size": 7936 00:14:56.043 }, 00:14:56.043 { 00:14:56.043 "name": "pt2", 00:14:56.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.043 "is_configured": true, 00:14:56.043 "data_offset": 256, 00:14:56.043 "data_size": 7936 00:14:56.043 } 00:14:56.043 ] 00:14:56.043 }' 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.043 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 11:57:53 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:56.303 11:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:56.303 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.303 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 11:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 [2024-12-06 11:57:54.020678] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.303 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.563 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 151de912-9bbc-433d-b433-fd25ae38f60b '!=' 151de912-9bbc-433d-b433-fd25ae38f60b ']' 00:14:56.563 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96143 00:14:56.563 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96143 ']' 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96143 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96143 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96143' 00:14:56.564 killing process with pid 96143 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96143 00:14:56.564 [2024-12-06 11:57:54.094577] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.564 [2024-12-06 11:57:54.094679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.564 [2024-12-06 11:57:54.094743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.564 [2024-12-06 11:57:54.094785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:56.564 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96143 00:14:56.564 [2024-12-06 11:57:54.117872] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.824 11:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:14:56.824 00:14:56.824 real 0m4.813s 00:14:56.825 user 0m7.789s 00:14:56.825 sys 0m1.096s 00:14:56.825 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.825 ************************************ 00:14:56.825 END TEST raid_superblock_test_4k 00:14:56.825 ************************************ 00:14:56.825 11:57:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 11:57:54 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:14:56.825 11:57:54 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:14:56.825 11:57:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:56.825 11:57:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.825 11:57:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 ************************************ 00:14:56.825 START TEST raid_rebuild_test_sb_4k 00:14:56.825 ************************************ 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96456 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96456 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96456 ']' 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.825 11:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 [2024-12-06 11:57:54.537369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:56.825 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.825 Zero copy mechanism will not be used. 00:14:56.825 [2024-12-06 11:57:54.537594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96456 ] 00:14:57.086 [2024-12-06 11:57:54.682914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.086 [2024-12-06 11:57:54.727127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.086 [2024-12-06 11:57:54.769365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.086 [2024-12-06 11:57:54.769397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:14:57.655 
11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.655 BaseBdev1_malloc 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.655 [2024-12-06 11:57:55.371322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.655 [2024-12-06 11:57:55.371401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.655 [2024-12-06 11:57:55.371428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:57.655 [2024-12-06 11:57:55.371443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.655 [2024-12-06 11:57:55.373470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.655 [2024-12-06 11:57:55.373552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.655 BaseBdev1 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.655 BaseBdev2_malloc 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.655 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 [2024-12-06 11:57:55.415011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:57.915 [2024-12-06 11:57:55.415103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.915 [2024-12-06 11:57:55.415143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:57.915 [2024-12-06 11:57:55.415164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.915 [2024-12-06 11:57:55.419705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.915 [2024-12-06 11:57:55.419773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.915 BaseBdev2 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 spare_malloc 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 spare_delay 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 [2024-12-06 11:57:55.457819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.915 [2024-12-06 11:57:55.457867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.915 [2024-12-06 11:57:55.457904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:57.915 [2024-12-06 11:57:55.457913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.915 [2024-12-06 11:57:55.459956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.915 [2024-12-06 11:57:55.460062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.915 spare 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 
[2024-12-06 11:57:55.469849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.915 [2024-12-06 11:57:55.471660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.915 [2024-12-06 11:57:55.471811] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:57.915 [2024-12-06 11:57:55.471823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:57.915 [2024-12-06 11:57:55.472060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:57.915 [2024-12-06 11:57:55.472203] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:57.915 [2024-12-06 11:57:55.472215] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:57.915 [2024-12-06 11:57:55.472344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.916 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.916 "name": "raid_bdev1", 00:14:57.916 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:14:57.916 "strip_size_kb": 0, 00:14:57.916 "state": "online", 00:14:57.916 "raid_level": "raid1", 00:14:57.916 "superblock": true, 00:14:57.916 "num_base_bdevs": 2, 00:14:57.916 "num_base_bdevs_discovered": 2, 00:14:57.916 "num_base_bdevs_operational": 2, 00:14:57.916 "base_bdevs_list": [ 00:14:57.916 { 00:14:57.916 "name": "BaseBdev1", 00:14:57.916 "uuid": "8d90564a-e0c6-5972-9089-b995c6568145", 00:14:57.916 "is_configured": true, 00:14:57.916 "data_offset": 256, 00:14:57.916 "data_size": 7936 00:14:57.916 }, 00:14:57.916 { 00:14:57.916 "name": "BaseBdev2", 00:14:57.916 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:14:57.916 "is_configured": true, 00:14:57.916 "data_offset": 256, 00:14:57.916 "data_size": 7936 00:14:57.916 } 00:14:57.916 ] 00:14:57.916 }' 00:14:57.916 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.916 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:14:58.175 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.175 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.175 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.175 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.175 [2024-12-06 11:57:55.913278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.175 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.434 11:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:58.693 [2024-12-06 11:57:56.192666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:58.693 /dev/nbd0 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.693 1+0 records in 00:14:58.693 1+0 records out 00:14:58.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039674 s, 10.3 MB/s 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:58.693 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:59.259 7936+0 records in 00:14:59.259 7936+0 records out 00:14:59.259 32505856 bytes (33 MB, 31 MiB) copied, 0.562878 s, 57.7 MB/s 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.259 11:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.519 [2024-12-06 11:57:57.034467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.519 [2024-12-06 11:57:57.050956] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.519 "name": 
"raid_bdev1", 00:14:59.519 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:14:59.519 "strip_size_kb": 0, 00:14:59.519 "state": "online", 00:14:59.519 "raid_level": "raid1", 00:14:59.519 "superblock": true, 00:14:59.519 "num_base_bdevs": 2, 00:14:59.519 "num_base_bdevs_discovered": 1, 00:14:59.519 "num_base_bdevs_operational": 1, 00:14:59.519 "base_bdevs_list": [ 00:14:59.519 { 00:14:59.519 "name": null, 00:14:59.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.519 "is_configured": false, 00:14:59.519 "data_offset": 0, 00:14:59.519 "data_size": 7936 00:14:59.519 }, 00:14:59.519 { 00:14:59.519 "name": "BaseBdev2", 00:14:59.519 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:14:59.519 "is_configured": true, 00:14:59.519 "data_offset": 256, 00:14:59.519 "data_size": 7936 00:14:59.519 } 00:14:59.519 ] 00:14:59.519 }' 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.519 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.779 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.779 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.779 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.779 [2024-12-06 11:57:57.482256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.779 [2024-12-06 11:57:57.486460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:14:59.779 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.779 11:57:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.779 [2024-12-06 11:57:57.488288] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.174 11:57:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.174 "name": "raid_bdev1", 00:15:01.174 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:01.174 "strip_size_kb": 0, 00:15:01.174 "state": "online", 00:15:01.174 "raid_level": "raid1", 00:15:01.174 "superblock": true, 00:15:01.174 "num_base_bdevs": 2, 00:15:01.174 "num_base_bdevs_discovered": 2, 00:15:01.174 "num_base_bdevs_operational": 2, 00:15:01.174 "process": { 00:15:01.174 "type": "rebuild", 00:15:01.174 "target": "spare", 00:15:01.174 "progress": { 00:15:01.174 "blocks": 2560, 00:15:01.174 "percent": 32 00:15:01.174 } 00:15:01.174 }, 00:15:01.174 "base_bdevs_list": [ 00:15:01.174 { 00:15:01.174 "name": "spare", 00:15:01.174 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 256, 
00:15:01.174 "data_size": 7936 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "name": "BaseBdev2", 00:15:01.174 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 256, 00:15:01.174 "data_size": 7936 00:15:01.174 } 00:15:01.174 ] 00:15:01.174 }' 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.174 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 [2024-12-06 11:57:58.648890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.175 [2024-12-06 11:57:58.692759] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.175 [2024-12-06 11:57:58.692814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.175 [2024-12-06 11:57:58.692848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.175 [2024-12-06 11:57:58.692864] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.175 
11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.175 "name": "raid_bdev1", 00:15:01.175 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:01.175 "strip_size_kb": 0, 00:15:01.175 "state": "online", 00:15:01.175 "raid_level": "raid1", 00:15:01.175 "superblock": true, 00:15:01.175 "num_base_bdevs": 2, 00:15:01.175 "num_base_bdevs_discovered": 1, 00:15:01.175 
"num_base_bdevs_operational": 1, 00:15:01.175 "base_bdevs_list": [ 00:15:01.175 { 00:15:01.175 "name": null, 00:15:01.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.175 "is_configured": false, 00:15:01.175 "data_offset": 0, 00:15:01.175 "data_size": 7936 00:15:01.175 }, 00:15:01.175 { 00:15:01.175 "name": "BaseBdev2", 00:15:01.175 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:01.175 "is_configured": true, 00:15:01.175 "data_offset": 256, 00:15:01.175 "data_size": 7936 00:15:01.175 } 00:15:01.175 ] 00:15:01.175 }' 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.175 11:57:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.434 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.434 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.435 
"name": "raid_bdev1", 00:15:01.435 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:01.435 "strip_size_kb": 0, 00:15:01.435 "state": "online", 00:15:01.435 "raid_level": "raid1", 00:15:01.435 "superblock": true, 00:15:01.435 "num_base_bdevs": 2, 00:15:01.435 "num_base_bdevs_discovered": 1, 00:15:01.435 "num_base_bdevs_operational": 1, 00:15:01.435 "base_bdevs_list": [ 00:15:01.435 { 00:15:01.435 "name": null, 00:15:01.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.435 "is_configured": false, 00:15:01.435 "data_offset": 0, 00:15:01.435 "data_size": 7936 00:15:01.435 }, 00:15:01.435 { 00:15:01.435 "name": "BaseBdev2", 00:15:01.435 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:01.435 "is_configured": true, 00:15:01.435 "data_offset": 256, 00:15:01.435 "data_size": 7936 00:15:01.435 } 00:15:01.435 ] 00:15:01.435 }' 00:15:01.435 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.695 [2024-12-06 11:57:59.288214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.695 [2024-12-06 11:57:59.291706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:01.695 [2024-12-06 11:57:59.293583] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: 
Started rebuild on raid bdev raid_bdev1 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.695 11:57:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.634 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.635 "name": "raid_bdev1", 00:15:02.635 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:02.635 "strip_size_kb": 0, 00:15:02.635 "state": "online", 00:15:02.635 "raid_level": "raid1", 00:15:02.635 "superblock": true, 00:15:02.635 "num_base_bdevs": 2, 00:15:02.635 "num_base_bdevs_discovered": 2, 00:15:02.635 "num_base_bdevs_operational": 2, 00:15:02.635 "process": { 00:15:02.635 "type": "rebuild", 00:15:02.635 "target": "spare", 00:15:02.635 "progress": { 00:15:02.635 "blocks": 2560, 
00:15:02.635 "percent": 32 00:15:02.635 } 00:15:02.635 }, 00:15:02.635 "base_bdevs_list": [ 00:15:02.635 { 00:15:02.635 "name": "spare", 00:15:02.635 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:02.635 "is_configured": true, 00:15:02.635 "data_offset": 256, 00:15:02.635 "data_size": 7936 00:15:02.635 }, 00:15:02.635 { 00:15:02.635 "name": "BaseBdev2", 00:15:02.635 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:02.635 "is_configured": true, 00:15:02.635 "data_offset": 256, 00:15:02.635 "data_size": 7936 00:15:02.635 } 00:15:02.635 ] 00:15:02.635 }' 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.635 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:02.895 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.895 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.895 "name": "raid_bdev1", 00:15:02.895 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:02.895 "strip_size_kb": 0, 00:15:02.895 "state": "online", 00:15:02.895 "raid_level": "raid1", 00:15:02.895 "superblock": true, 00:15:02.895 "num_base_bdevs": 2, 00:15:02.895 "num_base_bdevs_discovered": 2, 00:15:02.895 "num_base_bdevs_operational": 2, 00:15:02.895 "process": { 00:15:02.895 "type": "rebuild", 00:15:02.895 "target": "spare", 00:15:02.895 "progress": { 00:15:02.895 "blocks": 2816, 00:15:02.895 "percent": 35 00:15:02.895 } 00:15:02.895 }, 00:15:02.895 "base_bdevs_list": [ 00:15:02.895 { 00:15:02.895 "name": "spare", 00:15:02.895 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:02.895 "is_configured": true, 00:15:02.895 "data_offset": 256, 00:15:02.895 "data_size": 7936 00:15:02.895 }, 00:15:02.895 { 
00:15:02.895 "name": "BaseBdev2", 00:15:02.895 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:02.895 "is_configured": true, 00:15:02.895 "data_offset": 256, 00:15:02.895 "data_size": 7936 00:15:02.895 } 00:15:02.895 ] 00:15:02.895 }' 00:15:02.896 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.896 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.896 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.896 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.896 11:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.836 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.097 11:58:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.097 "name": "raid_bdev1", 00:15:04.097 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:04.097 "strip_size_kb": 0, 00:15:04.097 "state": "online", 00:15:04.097 "raid_level": "raid1", 00:15:04.097 "superblock": true, 00:15:04.097 "num_base_bdevs": 2, 00:15:04.097 "num_base_bdevs_discovered": 2, 00:15:04.097 "num_base_bdevs_operational": 2, 00:15:04.097 "process": { 00:15:04.097 "type": "rebuild", 00:15:04.097 "target": "spare", 00:15:04.097 "progress": { 00:15:04.097 "blocks": 5632, 00:15:04.097 "percent": 70 00:15:04.097 } 00:15:04.097 }, 00:15:04.097 "base_bdevs_list": [ 00:15:04.097 { 00:15:04.097 "name": "spare", 00:15:04.097 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:04.097 "is_configured": true, 00:15:04.097 "data_offset": 256, 00:15:04.097 "data_size": 7936 00:15:04.097 }, 00:15:04.097 { 00:15:04.097 "name": "BaseBdev2", 00:15:04.097 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:04.097 "is_configured": true, 00:15:04.097 "data_offset": 256, 00:15:04.097 "data_size": 7936 00:15:04.097 } 00:15:04.097 ] 00:15:04.097 }' 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.097 11:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.666 [2024-12-06 11:58:02.403826] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:04.666 [2024-12-06 11:58:02.403967] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:04.666 [2024-12-06 11:58:02.404114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.238 "name": "raid_bdev1", 00:15:05.238 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:05.238 "strip_size_kb": 0, 00:15:05.238 "state": "online", 00:15:05.238 "raid_level": "raid1", 00:15:05.238 "superblock": true, 00:15:05.238 "num_base_bdevs": 2, 00:15:05.238 "num_base_bdevs_discovered": 2, 00:15:05.238 "num_base_bdevs_operational": 2, 00:15:05.238 "base_bdevs_list": [ 00:15:05.238 { 00:15:05.238 "name": 
"spare", 00:15:05.238 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:05.238 "is_configured": true, 00:15:05.238 "data_offset": 256, 00:15:05.238 "data_size": 7936 00:15:05.238 }, 00:15:05.238 { 00:15:05.238 "name": "BaseBdev2", 00:15:05.238 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:05.238 "is_configured": true, 00:15:05.238 "data_offset": 256, 00:15:05.238 "data_size": 7936 00:15:05.238 } 00:15:05.238 ] 00:15:05.238 }' 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.238 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.238 "name": "raid_bdev1", 00:15:05.238 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:05.238 "strip_size_kb": 0, 00:15:05.238 "state": "online", 00:15:05.238 "raid_level": "raid1", 00:15:05.238 "superblock": true, 00:15:05.238 "num_base_bdevs": 2, 00:15:05.238 "num_base_bdevs_discovered": 2, 00:15:05.238 "num_base_bdevs_operational": 2, 00:15:05.238 "base_bdevs_list": [ 00:15:05.238 { 00:15:05.238 "name": "spare", 00:15:05.238 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:05.238 "is_configured": true, 00:15:05.238 "data_offset": 256, 00:15:05.239 "data_size": 7936 00:15:05.239 }, 00:15:05.239 { 00:15:05.239 "name": "BaseBdev2", 00:15:05.239 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:05.239 "is_configured": true, 00:15:05.239 "data_offset": 256, 00:15:05.239 "data_size": 7936 00:15:05.239 } 00:15:05.239 ] 00:15:05.239 }' 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.239 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.499 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.499 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.499 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.499 11:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.499 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.499 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.499 "name": "raid_bdev1", 00:15:05.499 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:05.499 "strip_size_kb": 0, 00:15:05.499 "state": "online", 00:15:05.499 "raid_level": "raid1", 00:15:05.499 "superblock": true, 00:15:05.499 "num_base_bdevs": 2, 00:15:05.499 "num_base_bdevs_discovered": 2, 00:15:05.499 "num_base_bdevs_operational": 2, 00:15:05.499 "base_bdevs_list": [ 00:15:05.499 { 00:15:05.499 "name": "spare", 00:15:05.499 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:05.499 "is_configured": true, 00:15:05.499 "data_offset": 256, 00:15:05.499 "data_size": 7936 00:15:05.499 }, 00:15:05.499 
{ 00:15:05.499 "name": "BaseBdev2", 00:15:05.499 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:05.499 "is_configured": true, 00:15:05.499 "data_offset": 256, 00:15:05.499 "data_size": 7936 00:15:05.499 } 00:15:05.499 ] 00:15:05.499 }' 00:15:05.499 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.499 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.759 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.760 [2024-12-06 11:58:03.342571] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.760 [2024-12-06 11:58:03.342598] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.760 [2024-12-06 11:58:03.342670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.760 [2024-12-06 11:58:03.342741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.760 [2024-12-06 11:58:03.342753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.760 
11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:05.760 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:06.020 /dev/nbd0 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:06.020 11:58:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.020 1+0 records in 00:15:06.020 1+0 records out 00:15:06.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505726 s, 8.1 MB/s 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.020 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:06.280 /dev/nbd1 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.280 1+0 records in 00:15:06.280 1+0 records out 00:15:06.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337129 s, 12.1 MB/s 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.280 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.281 11:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.541 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.802 [2024-12-06 11:58:04.427148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:06.802 [2024-12-06 11:58:04.427203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.802 [2024-12-06 11:58:04.427222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:06.802 [2024-12-06 11:58:04.427244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.802 [2024-12-06 11:58:04.429357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.802 [2024-12-06 11:58:04.429398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:06.802 [2024-12-06 11:58:04.429480] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:06.802 [2024-12-06 11:58:04.429527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.802 [2024-12-06 11:58:04.429632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.802 spare 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.802 [2024-12-06 11:58:04.529528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:06.802 [2024-12-06 11:58:04.529553] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:06.802 [2024-12-06 11:58:04.529786] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:06.802 [2024-12-06 11:58:04.529939] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:06.802 [2024-12-06 11:58:04.529975] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:06.802 [2024-12-06 11:58:04.530091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.802 11:58:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.802 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.062 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.062 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.062 "name": "raid_bdev1", 00:15:07.062 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:07.062 "strip_size_kb": 0, 00:15:07.062 "state": "online", 00:15:07.062 "raid_level": "raid1", 00:15:07.062 "superblock": true, 00:15:07.062 "num_base_bdevs": 2, 00:15:07.062 "num_base_bdevs_discovered": 2, 00:15:07.062 "num_base_bdevs_operational": 2, 00:15:07.062 "base_bdevs_list": [ 00:15:07.062 { 00:15:07.062 "name": "spare", 00:15:07.062 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:07.062 "is_configured": true, 00:15:07.062 "data_offset": 256, 00:15:07.062 "data_size": 7936 00:15:07.062 }, 00:15:07.062 { 00:15:07.062 "name": "BaseBdev2", 00:15:07.062 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:07.062 "is_configured": true, 00:15:07.062 "data_offset": 256, 00:15:07.062 "data_size": 7936 00:15:07.062 } 00:15:07.062 ] 00:15:07.062 }' 00:15:07.062 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.062 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.323 11:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.323 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.323 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.323 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.323 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.323 "name": "raid_bdev1", 00:15:07.323 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:07.323 "strip_size_kb": 0, 00:15:07.323 "state": "online", 00:15:07.323 "raid_level": "raid1", 00:15:07.323 "superblock": true, 00:15:07.323 "num_base_bdevs": 2, 00:15:07.323 "num_base_bdevs_discovered": 2, 00:15:07.323 "num_base_bdevs_operational": 2, 00:15:07.323 "base_bdevs_list": [ 00:15:07.323 { 00:15:07.323 "name": "spare", 00:15:07.323 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:07.323 "is_configured": true, 00:15:07.323 "data_offset": 256, 00:15:07.323 "data_size": 7936 00:15:07.323 }, 00:15:07.323 { 00:15:07.323 "name": "BaseBdev2", 00:15:07.323 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:07.323 "is_configured": true, 00:15:07.323 "data_offset": 256, 00:15:07.323 "data_size": 7936 00:15:07.323 } 00:15:07.323 ] 00:15:07.323 }' 00:15:07.323 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.582 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.582 [2024-12-06 11:58:05.153929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.583 11:58:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.583 "name": "raid_bdev1", 00:15:07.583 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:07.583 "strip_size_kb": 0, 00:15:07.583 "state": "online", 00:15:07.583 "raid_level": "raid1", 00:15:07.583 "superblock": true, 00:15:07.583 "num_base_bdevs": 2, 00:15:07.583 "num_base_bdevs_discovered": 1, 00:15:07.583 "num_base_bdevs_operational": 1, 00:15:07.583 "base_bdevs_list": [ 00:15:07.583 { 00:15:07.583 "name": null, 00:15:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.583 "is_configured": false, 00:15:07.583 "data_offset": 0, 00:15:07.583 "data_size": 7936 00:15:07.583 }, 00:15:07.583 { 00:15:07.583 "name": "BaseBdev2", 00:15:07.583 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:07.583 "is_configured": true, 00:15:07.583 "data_offset": 256, 00:15:07.583 "data_size": 7936 00:15:07.583 } 00:15:07.583 ] 00:15:07.583 }' 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.583 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.842 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.842 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.842 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.842 [2024-12-06 11:58:05.585209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.842 [2024-12-06 11:58:05.585382] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:07.842 [2024-12-06 11:58:05.585396] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:07.842 [2024-12-06 11:58:05.585434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.842 [2024-12-06 11:58:05.589553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:07.842 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.842 11:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:07.842 [2024-12-06 11:58:05.591366] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.225 
11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.225 "name": "raid_bdev1", 00:15:09.225 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:09.225 "strip_size_kb": 0, 00:15:09.225 "state": "online", 00:15:09.225 "raid_level": "raid1", 00:15:09.225 "superblock": true, 00:15:09.225 "num_base_bdevs": 2, 00:15:09.225 "num_base_bdevs_discovered": 2, 00:15:09.225 "num_base_bdevs_operational": 2, 00:15:09.225 "process": { 00:15:09.225 "type": "rebuild", 00:15:09.225 "target": "spare", 00:15:09.225 "progress": { 00:15:09.225 "blocks": 2560, 00:15:09.225 "percent": 32 00:15:09.225 } 00:15:09.225 }, 00:15:09.225 "base_bdevs_list": [ 00:15:09.225 { 00:15:09.225 "name": "spare", 00:15:09.225 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:09.225 "is_configured": true, 00:15:09.225 "data_offset": 256, 00:15:09.225 "data_size": 7936 00:15:09.225 }, 00:15:09.225 { 00:15:09.225 "name": "BaseBdev2", 00:15:09.225 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:09.225 "is_configured": true, 00:15:09.225 "data_offset": 256, 00:15:09.225 "data_size": 7936 00:15:09.225 } 00:15:09.225 ] 00:15:09.225 }' 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.225 11:58:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.225 [2024-12-06 11:58:06.736199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.225 [2024-12-06 11:58:06.795263] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.225 [2024-12-06 11:58:06.795318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.225 [2024-12-06 11:58:06.795334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.225 [2024-12-06 11:58:06.795341] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.225 11:58:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.225 "name": "raid_bdev1", 00:15:09.225 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:09.225 "strip_size_kb": 0, 00:15:09.225 "state": "online", 00:15:09.225 "raid_level": "raid1", 00:15:09.225 "superblock": true, 00:15:09.225 "num_base_bdevs": 2, 00:15:09.225 "num_base_bdevs_discovered": 1, 00:15:09.225 "num_base_bdevs_operational": 1, 00:15:09.225 "base_bdevs_list": [ 00:15:09.225 { 00:15:09.225 "name": null, 00:15:09.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.225 "is_configured": false, 00:15:09.225 "data_offset": 0, 00:15:09.225 "data_size": 7936 00:15:09.225 }, 00:15:09.225 { 00:15:09.225 "name": "BaseBdev2", 00:15:09.225 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:09.225 "is_configured": true, 00:15:09.225 "data_offset": 256, 00:15:09.225 
"data_size": 7936 00:15:09.225 } 00:15:09.225 ] 00:15:09.225 }' 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.225 11:58:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 11:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.486 11:58:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.486 11:58:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 [2024-12-06 11:58:07.214682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.486 [2024-12-06 11:58:07.214740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.486 [2024-12-06 11:58:07.214781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:09.486 [2024-12-06 11:58:07.214789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.486 [2024-12-06 11:58:07.215190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.486 [2024-12-06 11:58:07.215213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.486 [2024-12-06 11:58:07.215300] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:09.486 [2024-12-06 11:58:07.215316] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:09.486 [2024-12-06 11:58:07.215329] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:09.486 [2024-12-06 11:58:07.215361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.486 [2024-12-06 11:58:07.218545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:09.486 [2024-12-06 11:58:07.220403] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.486 spare 00:15:09.486 11:58:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.486 11:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.870 "name": "raid_bdev1", 00:15:10.870 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:10.870 "strip_size_kb": 0, 00:15:10.870 
"state": "online", 00:15:10.870 "raid_level": "raid1", 00:15:10.870 "superblock": true, 00:15:10.870 "num_base_bdevs": 2, 00:15:10.870 "num_base_bdevs_discovered": 2, 00:15:10.870 "num_base_bdevs_operational": 2, 00:15:10.870 "process": { 00:15:10.870 "type": "rebuild", 00:15:10.870 "target": "spare", 00:15:10.870 "progress": { 00:15:10.870 "blocks": 2560, 00:15:10.870 "percent": 32 00:15:10.870 } 00:15:10.870 }, 00:15:10.870 "base_bdevs_list": [ 00:15:10.870 { 00:15:10.870 "name": "spare", 00:15:10.870 "uuid": "d76b98a7-c4f0-53b8-8d72-548de907957c", 00:15:10.870 "is_configured": true, 00:15:10.870 "data_offset": 256, 00:15:10.870 "data_size": 7936 00:15:10.870 }, 00:15:10.870 { 00:15:10.870 "name": "BaseBdev2", 00:15:10.870 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:10.870 "is_configured": true, 00:15:10.870 "data_offset": 256, 00:15:10.870 "data_size": 7936 00:15:10.870 } 00:15:10.870 ] 00:15:10.870 }' 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 [2024-12-06 11:58:08.383550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.870 [2024-12-06 11:58:08.424317] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:10.870 [2024-12-06 11:58:08.424377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.870 [2024-12-06 11:58:08.424392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.870 [2024-12-06 11:58:08.424402] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 11:58:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.870 "name": "raid_bdev1", 00:15:10.870 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:10.870 "strip_size_kb": 0, 00:15:10.870 "state": "online", 00:15:10.870 "raid_level": "raid1", 00:15:10.870 "superblock": true, 00:15:10.870 "num_base_bdevs": 2, 00:15:10.870 "num_base_bdevs_discovered": 1, 00:15:10.870 "num_base_bdevs_operational": 1, 00:15:10.870 "base_bdevs_list": [ 00:15:10.870 { 00:15:10.870 "name": null, 00:15:10.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.870 "is_configured": false, 00:15:10.870 "data_offset": 0, 00:15:10.871 "data_size": 7936 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "BaseBdev2", 00:15:10.871 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 256, 00:15:10.871 "data_size": 7936 00:15:10.871 } 00:15:10.871 ] 00:15:10.871 }' 00:15:10.871 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.871 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.131 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.391 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.391 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.392 "name": "raid_bdev1", 00:15:11.392 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:11.392 "strip_size_kb": 0, 00:15:11.392 "state": "online", 00:15:11.392 "raid_level": "raid1", 00:15:11.392 "superblock": true, 00:15:11.392 "num_base_bdevs": 2, 00:15:11.392 "num_base_bdevs_discovered": 1, 00:15:11.392 "num_base_bdevs_operational": 1, 00:15:11.392 "base_bdevs_list": [ 00:15:11.392 { 00:15:11.392 "name": null, 00:15:11.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.392 "is_configured": false, 00:15:11.392 "data_offset": 0, 00:15:11.392 "data_size": 7936 00:15:11.392 }, 00:15:11.392 { 00:15:11.392 "name": "BaseBdev2", 00:15:11.392 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:11.392 "is_configured": true, 00:15:11.392 "data_offset": 256, 00:15:11.392 "data_size": 7936 00:15:11.392 } 00:15:11.392 ] 00:15:11.392 }' 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.392 11:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.392 [2024-12-06 11:58:09.031291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:11.392 [2024-12-06 11:58:09.031345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.392 [2024-12-06 11:58:09.031388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:11.392 [2024-12-06 11:58:09.031398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.392 [2024-12-06 11:58:09.031810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.392 [2024-12-06 11:58:09.031840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.392 [2024-12-06 11:58:09.031906] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:11.392 [2024-12-06 11:58:09.031926] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:11.392 [2024-12-06 11:58:09.031934] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:11.392 [2024-12-06 11:58:09.031946] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:11.392 BaseBdev1 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.392 11:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.333 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.592 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.592 "name": "raid_bdev1", 00:15:12.592 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:12.592 "strip_size_kb": 0, 00:15:12.592 "state": "online", 00:15:12.592 "raid_level": "raid1", 00:15:12.592 "superblock": true, 00:15:12.592 "num_base_bdevs": 2, 00:15:12.592 "num_base_bdevs_discovered": 1, 00:15:12.592 "num_base_bdevs_operational": 1, 00:15:12.592 "base_bdevs_list": [ 00:15:12.592 { 00:15:12.592 "name": null, 00:15:12.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.592 "is_configured": false, 00:15:12.592 "data_offset": 0, 00:15:12.592 "data_size": 7936 00:15:12.592 }, 00:15:12.592 { 00:15:12.592 "name": "BaseBdev2", 00:15:12.592 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:12.592 "is_configured": true, 00:15:12.592 "data_offset": 256, 00:15:12.592 "data_size": 7936 00:15:12.592 } 00:15:12.592 ] 00:15:12.592 }' 00:15:12.592 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.592 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.851 "name": "raid_bdev1", 00:15:12.851 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:12.851 "strip_size_kb": 0, 00:15:12.851 "state": "online", 00:15:12.851 "raid_level": "raid1", 00:15:12.851 "superblock": true, 00:15:12.851 "num_base_bdevs": 2, 00:15:12.851 "num_base_bdevs_discovered": 1, 00:15:12.851 "num_base_bdevs_operational": 1, 00:15:12.851 "base_bdevs_list": [ 00:15:12.851 { 00:15:12.851 "name": null, 00:15:12.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.851 "is_configured": false, 00:15:12.851 "data_offset": 0, 00:15:12.851 "data_size": 7936 00:15:12.851 }, 00:15:12.851 { 00:15:12.851 "name": "BaseBdev2", 00:15:12.851 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:12.851 "is_configured": true, 00:15:12.851 "data_offset": 256, 00:15:12.851 "data_size": 7936 00:15:12.851 } 00:15:12.851 ] 00:15:12.851 }' 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.851 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.110 [2024-12-06 11:58:10.628603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.110 [2024-12-06 11:58:10.628757] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:13.110 [2024-12-06 11:58:10.628770] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:13.110 request: 00:15:13.110 { 00:15:13.110 "base_bdev": "BaseBdev1", 00:15:13.110 "raid_bdev": "raid_bdev1", 00:15:13.110 "method": "bdev_raid_add_base_bdev", 00:15:13.110 "req_id": 1 00:15:13.110 } 00:15:13.110 Got JSON-RPC error response 00:15:13.110 response: 00:15:13.110 { 00:15:13.110 "code": -22, 00:15:13.110 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:13.110 } 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.110 11:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.049 "name": "raid_bdev1", 00:15:14.049 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:14.049 "strip_size_kb": 0, 00:15:14.049 "state": "online", 00:15:14.049 "raid_level": "raid1", 00:15:14.049 "superblock": true, 00:15:14.049 "num_base_bdevs": 2, 00:15:14.049 "num_base_bdevs_discovered": 1, 00:15:14.049 "num_base_bdevs_operational": 1, 00:15:14.049 "base_bdevs_list": [ 00:15:14.049 { 00:15:14.049 "name": null, 00:15:14.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.049 "is_configured": false, 00:15:14.049 "data_offset": 0, 00:15:14.049 "data_size": 7936 00:15:14.049 }, 00:15:14.049 { 00:15:14.049 "name": "BaseBdev2", 00:15:14.049 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:14.049 "is_configured": true, 00:15:14.049 "data_offset": 256, 00:15:14.049 "data_size": 7936 00:15:14.049 } 00:15:14.049 ] 00:15:14.049 }' 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.049 11:58:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.618 11:58:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.618 "name": "raid_bdev1", 00:15:14.618 "uuid": "985be003-fcb5-4ad4-836e-d863a5896bbe", 00:15:14.618 "strip_size_kb": 0, 00:15:14.618 "state": "online", 00:15:14.618 "raid_level": "raid1", 00:15:14.618 "superblock": true, 00:15:14.618 "num_base_bdevs": 2, 00:15:14.618 "num_base_bdevs_discovered": 1, 00:15:14.618 "num_base_bdevs_operational": 1, 00:15:14.618 "base_bdevs_list": [ 00:15:14.618 { 00:15:14.618 "name": null, 00:15:14.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.618 "is_configured": false, 00:15:14.618 "data_offset": 0, 00:15:14.618 "data_size": 7936 00:15:14.618 }, 00:15:14.618 { 00:15:14.618 "name": "BaseBdev2", 00:15:14.618 "uuid": "80a12f72-75ee-5161-b81e-ccda27f70785", 00:15:14.618 "is_configured": true, 00:15:14.618 "data_offset": 256, 00:15:14.618 "data_size": 7936 00:15:14.618 } 00:15:14.618 ] 00:15:14.618 }' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.618 11:58:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96456 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96456 ']' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96456 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96456 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.618 killing process with pid 96456 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96456' 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96456 00:15:14.618 Received shutdown signal, test time was about 60.000000 seconds 00:15:14.618 00:15:14.618 Latency(us) 00:15:14.618 [2024-12-06T11:58:12.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.618 [2024-12-06T11:58:12.376Z] =================================================================================================================== 00:15:14.618 [2024-12-06T11:58:12.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:14.618 [2024-12-06 11:58:12.261067] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.618 [2024-12-06 11:58:12.261187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.618 [2024-12-06 11:58:12.261256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:14.618 [2024-12-06 11:58:12.261267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:14.618 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96456 00:15:14.618 [2024-12-06 11:58:12.293585] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.878 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:14.878 00:15:14.878 real 0m18.081s 00:15:14.878 user 0m24.014s 00:15:14.878 sys 0m2.518s 00:15:14.878 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.878 11:58:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.878 ************************************ 00:15:14.878 END TEST raid_rebuild_test_sb_4k 00:15:14.878 ************************************ 00:15:14.878 11:58:12 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:14.878 11:58:12 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:14.878 11:58:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:14.878 11:58:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.878 11:58:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.878 ************************************ 00:15:14.878 START TEST raid_state_function_test_sb_md_separate 00:15:14.878 ************************************ 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:14.878 11:58:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:14.878 11:58:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97130 00:15:14.878 Process raid pid: 97130 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97130' 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97130 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97130 ']' 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.878 11:58:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.137 [2024-12-06 11:58:12.696924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:15.137 [2024-12-06 11:58:12.697031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.137 [2024-12-06 11:58:12.841542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.137 [2024-12-06 11:58:12.887226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.396 [2024-12-06 11:58:12.930446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.396 [2024-12-06 11:58:12.930481] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.964 [2024-12-06 11:58:13.516387] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.964 [2024-12-06 11:58:13.516440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:15.964 [2024-12-06 11:58:13.516452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.964 [2024-12-06 11:58:13.516461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.964 "name": "Existed_Raid", 00:15:15.964 "uuid": "e905fbda-064e-4930-b05d-6df1bac28a5f", 00:15:15.964 "strip_size_kb": 0, 00:15:15.964 "state": "configuring", 00:15:15.964 "raid_level": "raid1", 00:15:15.964 "superblock": true, 00:15:15.964 "num_base_bdevs": 2, 00:15:15.964 "num_base_bdevs_discovered": 0, 00:15:15.964 "num_base_bdevs_operational": 2, 00:15:15.964 "base_bdevs_list": [ 00:15:15.964 { 00:15:15.964 "name": "BaseBdev1", 00:15:15.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.964 "is_configured": false, 00:15:15.964 "data_offset": 0, 00:15:15.964 "data_size": 0 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "name": "BaseBdev2", 00:15:15.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.964 "is_configured": false, 00:15:15.964 "data_offset": 0, 00:15:15.964 "data_size": 0 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }' 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.964 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.222 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:16.222 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.222 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:15:16.222 [2024-12-06 11:58:13.963486] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.223 [2024-12-06 11:58:13.963532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:16.223 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.223 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:16.223 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.223 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.223 [2024-12-06 11:58:13.975497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:16.223 [2024-12-06 11:58:13.975533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:16.223 [2024-12-06 11:58:13.975548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:16.223 [2024-12-06 11:58:13.975558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 [2024-12-06 11:58:13.996990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:16.482 BaseBdev1 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.482 11:58:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 [ 00:15:16.482 { 00:15:16.482 "name": "BaseBdev1", 00:15:16.482 "aliases": [ 00:15:16.482 "74f18d78-3819-48ed-87cb-14c5591694c4" 00:15:16.482 ], 00:15:16.482 "product_name": "Malloc 
disk", 00:15:16.482 "block_size": 4096, 00:15:16.482 "num_blocks": 8192, 00:15:16.482 "uuid": "74f18d78-3819-48ed-87cb-14c5591694c4", 00:15:16.482 "md_size": 32, 00:15:16.482 "md_interleave": false, 00:15:16.482 "dif_type": 0, 00:15:16.482 "assigned_rate_limits": { 00:15:16.482 "rw_ios_per_sec": 0, 00:15:16.482 "rw_mbytes_per_sec": 0, 00:15:16.482 "r_mbytes_per_sec": 0, 00:15:16.482 "w_mbytes_per_sec": 0 00:15:16.482 }, 00:15:16.482 "claimed": true, 00:15:16.482 "claim_type": "exclusive_write", 00:15:16.482 "zoned": false, 00:15:16.482 "supported_io_types": { 00:15:16.482 "read": true, 00:15:16.482 "write": true, 00:15:16.482 "unmap": true, 00:15:16.482 "flush": true, 00:15:16.482 "reset": true, 00:15:16.482 "nvme_admin": false, 00:15:16.482 "nvme_io": false, 00:15:16.482 "nvme_io_md": false, 00:15:16.482 "write_zeroes": true, 00:15:16.482 "zcopy": true, 00:15:16.482 "get_zone_info": false, 00:15:16.482 "zone_management": false, 00:15:16.482 "zone_append": false, 00:15:16.482 "compare": false, 00:15:16.482 "compare_and_write": false, 00:15:16.482 "abort": true, 00:15:16.482 "seek_hole": false, 00:15:16.482 "seek_data": false, 00:15:16.482 "copy": true, 00:15:16.482 "nvme_iov_md": false 00:15:16.482 }, 00:15:16.482 "memory_domains": [ 00:15:16.482 { 00:15:16.482 "dma_device_id": "system", 00:15:16.482 "dma_device_type": 1 00:15:16.482 }, 00:15:16.482 { 00:15:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.482 "dma_device_type": 2 00:15:16.482 } 00:15:16.482 ], 00:15:16.482 "driver_specific": {} 00:15:16.482 } 00:15:16.482 ] 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:16.482 11:58:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.482 "name": "Existed_Raid", 00:15:16.482 "uuid": "71e6400c-02ed-461b-9f75-b51778f754f0", 
00:15:16.482 "strip_size_kb": 0, 00:15:16.482 "state": "configuring", 00:15:16.482 "raid_level": "raid1", 00:15:16.482 "superblock": true, 00:15:16.482 "num_base_bdevs": 2, 00:15:16.482 "num_base_bdevs_discovered": 1, 00:15:16.482 "num_base_bdevs_operational": 2, 00:15:16.482 "base_bdevs_list": [ 00:15:16.482 { 00:15:16.482 "name": "BaseBdev1", 00:15:16.482 "uuid": "74f18d78-3819-48ed-87cb-14c5591694c4", 00:15:16.482 "is_configured": true, 00:15:16.482 "data_offset": 256, 00:15:16.482 "data_size": 7936 00:15:16.482 }, 00:15:16.482 { 00:15:16.482 "name": "BaseBdev2", 00:15:16.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.482 "is_configured": false, 00:15:16.482 "data_offset": 0, 00:15:16.482 "data_size": 0 00:15:16.482 } 00:15:16.482 ] 00:15:16.482 }' 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.482 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.745 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:16.745 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.745 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.745 [2024-12-06 11:58:14.500152] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.745 [2024-12-06 11:58:14.500196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:17.012 11:58:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.012 [2024-12-06 11:58:14.512195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.012 [2024-12-06 11:58:14.514004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.012 [2024-12-06 11:58:14.514042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.012 "name": "Existed_Raid", 00:15:17.012 "uuid": "7929f05b-ebd5-4596-9142-b0b252fbf079", 00:15:17.012 "strip_size_kb": 0, 00:15:17.012 "state": "configuring", 00:15:17.012 "raid_level": "raid1", 00:15:17.012 "superblock": true, 00:15:17.012 "num_base_bdevs": 2, 00:15:17.012 "num_base_bdevs_discovered": 1, 00:15:17.012 "num_base_bdevs_operational": 2, 00:15:17.012 "base_bdevs_list": [ 00:15:17.012 { 00:15:17.012 "name": "BaseBdev1", 00:15:17.012 "uuid": "74f18d78-3819-48ed-87cb-14c5591694c4", 00:15:17.012 "is_configured": true, 00:15:17.012 "data_offset": 256, 00:15:17.012 "data_size": 7936 00:15:17.012 }, 00:15:17.012 { 00:15:17.012 "name": "BaseBdev2", 00:15:17.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.012 "is_configured": false, 00:15:17.012 "data_offset": 0, 00:15:17.012 "data_size": 0 00:15:17.012 } 00:15:17.012 ] 00:15:17.012 }' 00:15:17.012 11:58:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.012 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.291 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.292 [2024-12-06 11:58:14.952709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.292 [2024-12-06 11:58:14.953336] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:17.292 [2024-12-06 11:58:14.953401] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:17.292 BaseBdev2 00:15:17.292 [2024-12-06 11:58:14.953768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:17.292 [2024-12-06 11:58:14.954180] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:17.292 [2024-12-06 11:58:14.954288] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.292 [2024-12-06 11:58:14.954564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.292 [ 00:15:17.292 { 00:15:17.292 "name": "BaseBdev2", 00:15:17.292 "aliases": [ 00:15:17.292 "978ec701-e83d-47b6-9562-9338df8b0812" 00:15:17.292 ], 00:15:17.292 "product_name": "Malloc disk", 00:15:17.292 "block_size": 4096, 00:15:17.292 "num_blocks": 8192, 00:15:17.292 "uuid": "978ec701-e83d-47b6-9562-9338df8b0812", 00:15:17.292 "md_size": 32, 00:15:17.292 "md_interleave": false, 00:15:17.292 "dif_type": 0, 00:15:17.292 "assigned_rate_limits": { 00:15:17.292 "rw_ios_per_sec": 0, 00:15:17.292 "rw_mbytes_per_sec": 0, 00:15:17.292 "r_mbytes_per_sec": 0, 00:15:17.292 "w_mbytes_per_sec": 0 00:15:17.292 }, 00:15:17.292 "claimed": true, 00:15:17.292 "claim_type": 
"exclusive_write", 00:15:17.292 "zoned": false, 00:15:17.292 "supported_io_types": { 00:15:17.292 "read": true, 00:15:17.292 "write": true, 00:15:17.292 "unmap": true, 00:15:17.292 "flush": true, 00:15:17.292 "reset": true, 00:15:17.292 "nvme_admin": false, 00:15:17.292 "nvme_io": false, 00:15:17.292 "nvme_io_md": false, 00:15:17.292 "write_zeroes": true, 00:15:17.292 "zcopy": true, 00:15:17.292 "get_zone_info": false, 00:15:17.292 "zone_management": false, 00:15:17.292 "zone_append": false, 00:15:17.292 "compare": false, 00:15:17.292 "compare_and_write": false, 00:15:17.292 "abort": true, 00:15:17.292 "seek_hole": false, 00:15:17.292 "seek_data": false, 00:15:17.292 "copy": true, 00:15:17.292 "nvme_iov_md": false 00:15:17.292 }, 00:15:17.292 "memory_domains": [ 00:15:17.292 { 00:15:17.292 "dma_device_id": "system", 00:15:17.292 "dma_device_type": 1 00:15:17.292 }, 00:15:17.292 { 00:15:17.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.292 "dma_device_type": 2 00:15:17.292 } 00:15:17.292 ], 00:15:17.292 "driver_specific": {} 00:15:17.292 } 00:15:17.292 ] 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.292 
11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.292 11:58:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.292 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.292 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.292 "name": "Existed_Raid", 00:15:17.292 "uuid": "7929f05b-ebd5-4596-9142-b0b252fbf079", 00:15:17.292 "strip_size_kb": 0, 00:15:17.292 "state": "online", 00:15:17.292 "raid_level": "raid1", 00:15:17.292 "superblock": true, 00:15:17.292 "num_base_bdevs": 2, 00:15:17.292 "num_base_bdevs_discovered": 2, 00:15:17.292 "num_base_bdevs_operational": 2, 00:15:17.292 
"base_bdevs_list": [ 00:15:17.292 { 00:15:17.292 "name": "BaseBdev1", 00:15:17.292 "uuid": "74f18d78-3819-48ed-87cb-14c5591694c4", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 256, 00:15:17.292 "data_size": 7936 00:15:17.292 }, 00:15:17.292 { 00:15:17.292 "name": "BaseBdev2", 00:15:17.292 "uuid": "978ec701-e83d-47b6-9562-9338df8b0812", 00:15:17.292 "is_configured": true, 00:15:17.292 "data_offset": 256, 00:15:17.292 "data_size": 7936 00:15:17.292 } 00:15:17.292 ] 00:15:17.292 }' 00:15:17.292 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.292 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:17.870 [2024-12-06 11:58:15.432184] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.870 "name": "Existed_Raid", 00:15:17.870 "aliases": [ 00:15:17.870 "7929f05b-ebd5-4596-9142-b0b252fbf079" 00:15:17.870 ], 00:15:17.870 "product_name": "Raid Volume", 00:15:17.870 "block_size": 4096, 00:15:17.870 "num_blocks": 7936, 00:15:17.870 "uuid": "7929f05b-ebd5-4596-9142-b0b252fbf079", 00:15:17.870 "md_size": 32, 00:15:17.870 "md_interleave": false, 00:15:17.870 "dif_type": 0, 00:15:17.870 "assigned_rate_limits": { 00:15:17.870 "rw_ios_per_sec": 0, 00:15:17.870 "rw_mbytes_per_sec": 0, 00:15:17.870 "r_mbytes_per_sec": 0, 00:15:17.870 "w_mbytes_per_sec": 0 00:15:17.870 }, 00:15:17.870 "claimed": false, 00:15:17.870 "zoned": false, 00:15:17.870 "supported_io_types": { 00:15:17.870 "read": true, 00:15:17.870 "write": true, 00:15:17.870 "unmap": false, 00:15:17.870 "flush": false, 00:15:17.870 "reset": true, 00:15:17.870 "nvme_admin": false, 00:15:17.870 "nvme_io": false, 00:15:17.870 "nvme_io_md": false, 00:15:17.870 "write_zeroes": true, 00:15:17.870 "zcopy": false, 00:15:17.870 "get_zone_info": false, 00:15:17.870 "zone_management": false, 00:15:17.870 "zone_append": false, 00:15:17.870 "compare": false, 00:15:17.870 "compare_and_write": false, 00:15:17.870 "abort": false, 00:15:17.870 "seek_hole": false, 00:15:17.870 "seek_data": false, 00:15:17.870 "copy": false, 00:15:17.870 "nvme_iov_md": false 00:15:17.870 }, 00:15:17.870 "memory_domains": [ 00:15:17.870 { 00:15:17.870 "dma_device_id": "system", 00:15:17.870 "dma_device_type": 1 00:15:17.870 }, 00:15:17.870 { 00:15:17.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.870 "dma_device_type": 2 00:15:17.870 }, 00:15:17.870 { 
00:15:17.870 "dma_device_id": "system", 00:15:17.870 "dma_device_type": 1 00:15:17.870 }, 00:15:17.870 { 00:15:17.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.870 "dma_device_type": 2 00:15:17.870 } 00:15:17.870 ], 00:15:17.870 "driver_specific": { 00:15:17.870 "raid": { 00:15:17.870 "uuid": "7929f05b-ebd5-4596-9142-b0b252fbf079", 00:15:17.870 "strip_size_kb": 0, 00:15:17.870 "state": "online", 00:15:17.870 "raid_level": "raid1", 00:15:17.870 "superblock": true, 00:15:17.870 "num_base_bdevs": 2, 00:15:17.870 "num_base_bdevs_discovered": 2, 00:15:17.870 "num_base_bdevs_operational": 2, 00:15:17.870 "base_bdevs_list": [ 00:15:17.870 { 00:15:17.870 "name": "BaseBdev1", 00:15:17.870 "uuid": "74f18d78-3819-48ed-87cb-14c5591694c4", 00:15:17.870 "is_configured": true, 00:15:17.870 "data_offset": 256, 00:15:17.870 "data_size": 7936 00:15:17.870 }, 00:15:17.870 { 00:15:17.870 "name": "BaseBdev2", 00:15:17.870 "uuid": "978ec701-e83d-47b6-9562-9338df8b0812", 00:15:17.870 "is_configured": true, 00:15:17.870 "data_offset": 256, 00:15:17.870 "data_size": 7936 00:15:17.870 } 00:15:17.870 ] 00:15:17.870 } 00:15:17.870 } 00:15:17.870 }' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:17.870 BaseBdev2' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.870 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.129 [2024-12-06 11:58:15.631625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.129 "name": "Existed_Raid", 00:15:18.129 "uuid": "7929f05b-ebd5-4596-9142-b0b252fbf079", 00:15:18.129 "strip_size_kb": 0, 00:15:18.129 "state": "online", 00:15:18.129 "raid_level": "raid1", 00:15:18.129 "superblock": true, 00:15:18.129 "num_base_bdevs": 2, 00:15:18.129 "num_base_bdevs_discovered": 1, 00:15:18.129 "num_base_bdevs_operational": 1, 00:15:18.129 "base_bdevs_list": [ 00:15:18.129 { 00:15:18.129 "name": null, 00:15:18.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.129 "is_configured": false, 00:15:18.129 "data_offset": 0, 00:15:18.129 "data_size": 7936 00:15:18.129 }, 00:15:18.129 { 00:15:18.129 "name": "BaseBdev2", 00:15:18.129 "uuid": 
"978ec701-e83d-47b6-9562-9338df8b0812", 00:15:18.129 "is_configured": true, 00:15:18.129 "data_offset": 256, 00:15:18.129 "data_size": 7936 00:15:18.129 } 00:15:18.129 ] 00:15:18.129 }' 00:15:18.129 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.130 11:58:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.389 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.648 [2024-12-06 11:58:16.186974] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.648 [2024-12-06 11:58:16.187069] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.648 [2024-12-06 11:58:16.199566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.648 [2024-12-06 11:58:16.199619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.648 [2024-12-06 11:58:16.199638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:18.648 11:58:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97130 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97130 ']' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97130 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97130 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97130' 00:15:18.648 killing process with pid 97130 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97130 00:15:18.648 [2024-12-06 11:58:16.291662] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.648 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97130 00:15:18.649 [2024-12-06 11:58:16.292630] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.909 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:18.909 00:15:18.909 real 0m3.935s 00:15:18.909 user 0m6.171s 00:15:18.909 sys 0m0.806s 00:15:18.909 11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.909 
11:58:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.909 ************************************ 00:15:18.909 END TEST raid_state_function_test_sb_md_separate 00:15:18.909 ************************************ 00:15:18.909 11:58:16 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:18.909 11:58:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:18.909 11:58:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.909 11:58:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.909 ************************************ 00:15:18.909 START TEST raid_superblock_test_md_separate 00:15:18.909 ************************************ 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97366 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97366 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97366 ']' 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.909 11:58:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.177 [2024-12-06 11:58:16.711555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:19.177 [2024-12-06 11:58:16.711702] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97366 ] 00:15:19.177 [2024-12-06 11:58:16.857360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.177 [2024-12-06 11:58:16.902344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.444 [2024-12-06 11:58:16.944989] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.444 [2024-12-06 11:58:16.945030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:20.013 11:58:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.013 malloc1 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.013 [2024-12-06 11:58:17.544228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.013 [2024-12-06 11:58:17.544307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.013 [2024-12-06 11:58:17.544327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:20.013 [2024-12-06 11:58:17.544344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.013 [2024-12-06 11:58:17.546159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.013 [2024-12-06 11:58:17.546200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:20.013 pt1 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.013 malloc2 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.013 11:58:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.013 [2024-12-06 11:58:17.591092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.013 [2024-12-06 11:58:17.591196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.013 [2024-12-06 11:58:17.591258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.013 [2024-12-06 11:58:17.591287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.013 [2024-12-06 11:58:17.595629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.013 [2024-12-06 11:58:17.595697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.013 pt2 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.013 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.013 [2024-12-06 11:58:17.604000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.013 [2024-12-06 11:58:17.606716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.013 [2024-12-06 11:58:17.606909] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:20.013 [2024-12-06 11:58:17.606930] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:20.013 [2024-12-06 11:58:17.607017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:20.013 [2024-12-06 11:58:17.607141] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:20.013 [2024-12-06 11:58:17.607162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:20.013 [2024-12-06 11:58:17.607280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.014 11:58:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.014 "name": "raid_bdev1", 00:15:20.014 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:20.014 "strip_size_kb": 0, 00:15:20.014 "state": "online", 00:15:20.014 "raid_level": "raid1", 00:15:20.014 "superblock": true, 00:15:20.014 "num_base_bdevs": 2, 00:15:20.014 "num_base_bdevs_discovered": 2, 00:15:20.014 "num_base_bdevs_operational": 2, 00:15:20.014 "base_bdevs_list": [ 00:15:20.014 { 00:15:20.014 "name": "pt1", 00:15:20.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.014 "is_configured": true, 00:15:20.014 "data_offset": 256, 00:15:20.014 "data_size": 7936 00:15:20.014 }, 00:15:20.014 { 00:15:20.014 "name": "pt2", 00:15:20.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.014 "is_configured": true, 00:15:20.014 "data_offset": 256, 00:15:20.014 "data_size": 7936 00:15:20.014 } 00:15:20.014 ] 00:15:20.014 }' 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.014 11:58:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.610 [2024-12-06 11:58:18.067415] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.610 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.610 "name": "raid_bdev1", 00:15:20.610 "aliases": [ 00:15:20.610 "b5cee876-7aac-4755-9d68-97425a8de830" 00:15:20.610 ], 00:15:20.610 "product_name": "Raid Volume", 00:15:20.610 "block_size": 4096, 00:15:20.610 "num_blocks": 7936, 00:15:20.610 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:20.611 "md_size": 32, 00:15:20.611 "md_interleave": false, 00:15:20.611 "dif_type": 0, 00:15:20.611 "assigned_rate_limits": { 00:15:20.611 "rw_ios_per_sec": 0, 00:15:20.611 "rw_mbytes_per_sec": 0, 00:15:20.611 "r_mbytes_per_sec": 0, 00:15:20.611 "w_mbytes_per_sec": 0 00:15:20.611 }, 00:15:20.611 "claimed": false, 00:15:20.611 "zoned": false, 
00:15:20.611 "supported_io_types": { 00:15:20.611 "read": true, 00:15:20.611 "write": true, 00:15:20.611 "unmap": false, 00:15:20.611 "flush": false, 00:15:20.611 "reset": true, 00:15:20.611 "nvme_admin": false, 00:15:20.611 "nvme_io": false, 00:15:20.611 "nvme_io_md": false, 00:15:20.611 "write_zeroes": true, 00:15:20.611 "zcopy": false, 00:15:20.611 "get_zone_info": false, 00:15:20.611 "zone_management": false, 00:15:20.611 "zone_append": false, 00:15:20.611 "compare": false, 00:15:20.611 "compare_and_write": false, 00:15:20.611 "abort": false, 00:15:20.611 "seek_hole": false, 00:15:20.611 "seek_data": false, 00:15:20.611 "copy": false, 00:15:20.611 "nvme_iov_md": false 00:15:20.611 }, 00:15:20.611 "memory_domains": [ 00:15:20.611 { 00:15:20.611 "dma_device_id": "system", 00:15:20.611 "dma_device_type": 1 00:15:20.611 }, 00:15:20.611 { 00:15:20.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.611 "dma_device_type": 2 00:15:20.611 }, 00:15:20.611 { 00:15:20.611 "dma_device_id": "system", 00:15:20.611 "dma_device_type": 1 00:15:20.611 }, 00:15:20.611 { 00:15:20.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.611 "dma_device_type": 2 00:15:20.611 } 00:15:20.611 ], 00:15:20.611 "driver_specific": { 00:15:20.611 "raid": { 00:15:20.611 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:20.611 "strip_size_kb": 0, 00:15:20.611 "state": "online", 00:15:20.611 "raid_level": "raid1", 00:15:20.611 "superblock": true, 00:15:20.611 "num_base_bdevs": 2, 00:15:20.611 "num_base_bdevs_discovered": 2, 00:15:20.611 "num_base_bdevs_operational": 2, 00:15:20.611 "base_bdevs_list": [ 00:15:20.611 { 00:15:20.611 "name": "pt1", 00:15:20.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.611 "is_configured": true, 00:15:20.611 "data_offset": 256, 00:15:20.611 "data_size": 7936 00:15:20.611 }, 00:15:20.611 { 00:15:20.611 "name": "pt2", 00:15:20.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.611 "is_configured": true, 00:15:20.611 "data_offset": 256, 
00:15:20.611 "data_size": 7936 00:15:20.611 } 00:15:20.611 ] 00:15:20.611 } 00:15:20.611 } 00:15:20.611 }' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:20.611 pt2' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 [2024-12-06 11:58:18.270932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b5cee876-7aac-4755-9d68-97425a8de830 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b5cee876-7aac-4755-9d68-97425a8de830 ']' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.611 11:58:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 [2024-12-06 11:58:18.318648] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.611 [2024-12-06 11:58:18.318675] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.611 [2024-12-06 11:58:18.318736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.611 [2024-12-06 11:58:18.318783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.611 [2024-12-06 11:58:18.318797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.872 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.872 [2024-12-06 11:58:18.450437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:20.872 [2024-12-06 11:58:18.452228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:20.872 [2024-12-06 11:58:18.452300] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:20.872 [2024-12-06 11:58:18.452333] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:20.872 [2024-12-06 11:58:18.452348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.873 [2024-12-06 11:58:18.452357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:15:20.873 request: 00:15:20.873 { 00:15:20.873 "name": "raid_bdev1", 00:15:20.873 "raid_level": "raid1", 00:15:20.873 "base_bdevs": [ 00:15:20.873 "malloc1", 00:15:20.873 "malloc2" 00:15:20.873 ], 00:15:20.873 "superblock": false, 00:15:20.873 "method": "bdev_raid_create", 00:15:20.873 "req_id": 1 00:15:20.873 } 00:15:20.873 Got JSON-RPC error response 00:15:20.873 response: 00:15:20.873 { 00:15:20.873 "code": -17, 00:15:20.873 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:20.873 } 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.873 [2024-12-06 11:58:18.502322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.873 [2024-12-06 11:58:18.502371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.873 [2024-12-06 11:58:18.502389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.873 [2024-12-06 11:58:18.502397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.873 [2024-12-06 11:58:18.504339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.873 [2024-12-06 11:58:18.504369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.873 [2024-12-06 11:58:18.504411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.873 [2024-12-06 11:58:18.504454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.873 pt1 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.873 "name": "raid_bdev1", 00:15:20.873 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:20.873 "strip_size_kb": 0, 00:15:20.873 "state": "configuring", 00:15:20.873 "raid_level": "raid1", 00:15:20.873 "superblock": true, 00:15:20.873 "num_base_bdevs": 2, 00:15:20.873 "num_base_bdevs_discovered": 1, 00:15:20.873 "num_base_bdevs_operational": 2, 00:15:20.873 "base_bdevs_list": [ 00:15:20.873 { 00:15:20.873 "name": "pt1", 00:15:20.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.873 "is_configured": true, 00:15:20.873 "data_offset": 256, 00:15:20.873 "data_size": 7936 00:15:20.873 }, 00:15:20.873 { 
00:15:20.873 "name": null, 00:15:20.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.873 "is_configured": false, 00:15:20.873 "data_offset": 256, 00:15:20.873 "data_size": 7936 00:15:20.873 } 00:15:20.873 ] 00:15:20.873 }' 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.873 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.443 [2024-12-06 11:58:18.941578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.443 [2024-12-06 11:58:18.941624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.443 [2024-12-06 11:58:18.941643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:21.443 [2024-12-06 11:58:18.941652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.443 [2024-12-06 11:58:18.941810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.443 [2024-12-06 11:58:18.941824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.443 [2024-12-06 11:58:18.941864] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:21.443 [2024-12-06 11:58:18.941889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.443 [2024-12-06 11:58:18.941967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:21.443 [2024-12-06 11:58:18.941974] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:21.443 [2024-12-06 11:58:18.942047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:21.443 [2024-12-06 11:58:18.942119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:21.443 [2024-12-06 11:58:18.942131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:21.443 [2024-12-06 11:58:18.942191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.443 pt2 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.443 11:58:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.443 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.443 "name": "raid_bdev1", 00:15:21.443 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:21.443 "strip_size_kb": 0, 00:15:21.443 "state": "online", 00:15:21.443 "raid_level": "raid1", 00:15:21.443 "superblock": true, 00:15:21.443 "num_base_bdevs": 2, 00:15:21.443 "num_base_bdevs_discovered": 2, 00:15:21.443 "num_base_bdevs_operational": 2, 00:15:21.443 "base_bdevs_list": [ 00:15:21.443 { 00:15:21.443 "name": "pt1", 00:15:21.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.443 "is_configured": true, 00:15:21.443 "data_offset": 256, 00:15:21.443 "data_size": 7936 00:15:21.443 }, 00:15:21.443 { 00:15:21.443 "name": "pt2", 00:15:21.443 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:21.443 "is_configured": true, 00:15:21.443 "data_offset": 256, 00:15:21.443 "data_size": 7936 00:15:21.443 } 00:15:21.443 ] 00:15:21.443 }' 00:15:21.444 11:58:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.444 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.704 [2024-12-06 11:58:19.341114] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.704 "name": "raid_bdev1", 00:15:21.704 
"aliases": [ 00:15:21.704 "b5cee876-7aac-4755-9d68-97425a8de830" 00:15:21.704 ], 00:15:21.704 "product_name": "Raid Volume", 00:15:21.704 "block_size": 4096, 00:15:21.704 "num_blocks": 7936, 00:15:21.704 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:21.704 "md_size": 32, 00:15:21.704 "md_interleave": false, 00:15:21.704 "dif_type": 0, 00:15:21.704 "assigned_rate_limits": { 00:15:21.704 "rw_ios_per_sec": 0, 00:15:21.704 "rw_mbytes_per_sec": 0, 00:15:21.704 "r_mbytes_per_sec": 0, 00:15:21.704 "w_mbytes_per_sec": 0 00:15:21.704 }, 00:15:21.704 "claimed": false, 00:15:21.704 "zoned": false, 00:15:21.704 "supported_io_types": { 00:15:21.704 "read": true, 00:15:21.704 "write": true, 00:15:21.704 "unmap": false, 00:15:21.704 "flush": false, 00:15:21.704 "reset": true, 00:15:21.704 "nvme_admin": false, 00:15:21.704 "nvme_io": false, 00:15:21.704 "nvme_io_md": false, 00:15:21.704 "write_zeroes": true, 00:15:21.704 "zcopy": false, 00:15:21.704 "get_zone_info": false, 00:15:21.704 "zone_management": false, 00:15:21.704 "zone_append": false, 00:15:21.704 "compare": false, 00:15:21.704 "compare_and_write": false, 00:15:21.704 "abort": false, 00:15:21.704 "seek_hole": false, 00:15:21.704 "seek_data": false, 00:15:21.704 "copy": false, 00:15:21.704 "nvme_iov_md": false 00:15:21.704 }, 00:15:21.704 "memory_domains": [ 00:15:21.704 { 00:15:21.704 "dma_device_id": "system", 00:15:21.704 "dma_device_type": 1 00:15:21.704 }, 00:15:21.704 { 00:15:21.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.704 "dma_device_type": 2 00:15:21.704 }, 00:15:21.704 { 00:15:21.704 "dma_device_id": "system", 00:15:21.704 "dma_device_type": 1 00:15:21.704 }, 00:15:21.704 { 00:15:21.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.704 "dma_device_type": 2 00:15:21.704 } 00:15:21.704 ], 00:15:21.704 "driver_specific": { 00:15:21.704 "raid": { 00:15:21.704 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:21.704 "strip_size_kb": 0, 00:15:21.704 "state": "online", 00:15:21.704 
"raid_level": "raid1", 00:15:21.704 "superblock": true, 00:15:21.704 "num_base_bdevs": 2, 00:15:21.704 "num_base_bdevs_discovered": 2, 00:15:21.704 "num_base_bdevs_operational": 2, 00:15:21.704 "base_bdevs_list": [ 00:15:21.704 { 00:15:21.704 "name": "pt1", 00:15:21.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.704 "is_configured": true, 00:15:21.704 "data_offset": 256, 00:15:21.704 "data_size": 7936 00:15:21.704 }, 00:15:21.704 { 00:15:21.704 "name": "pt2", 00:15:21.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.704 "is_configured": true, 00:15:21.704 "data_offset": 256, 00:15:21.704 "data_size": 7936 00:15:21.704 } 00:15:21.704 ] 00:15:21.704 } 00:15:21.704 } 00:15:21.704 }' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:21.704 pt2' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.704 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 11:58:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:21.965 [2024-12-06 11:58:19.556736] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b5cee876-7aac-4755-9d68-97425a8de830 '!=' b5cee876-7aac-4755-9d68-97425a8de830 ']' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 [2024-12-06 11:58:19.600477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.965 
11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.965 "name": "raid_bdev1", 00:15:21.965 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:21.965 "strip_size_kb": 0, 00:15:21.965 "state": "online", 00:15:21.965 "raid_level": "raid1", 00:15:21.965 "superblock": true, 00:15:21.965 "num_base_bdevs": 2, 00:15:21.965 "num_base_bdevs_discovered": 1, 00:15:21.965 "num_base_bdevs_operational": 1, 00:15:21.965 "base_bdevs_list": [ 00:15:21.965 { 00:15:21.965 "name": null, 00:15:21.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.965 "is_configured": false, 00:15:21.965 "data_offset": 0, 00:15:21.965 "data_size": 7936 00:15:21.965 }, 00:15:21.965 { 00:15:21.965 "name": "pt2", 00:15:21.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.965 "is_configured": true, 00:15:21.965 "data_offset": 256, 00:15:21.965 "data_size": 7936 00:15:21.965 } 
00:15:21.965 ] 00:15:21.965 }' 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.965 11:58:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.536 [2024-12-06 11:58:20.059642] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.536 [2024-12-06 11:58:20.059671] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.536 [2024-12-06 11:58:20.059728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.536 [2024-12-06 11:58:20.059771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.536 [2024-12-06 11:58:20.059779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.536 11:58:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.536 [2024-12-06 11:58:20.131512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.536 [2024-12-06 
11:58:20.131559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.536 [2024-12-06 11:58:20.131577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:22.536 [2024-12-06 11:58:20.131586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.536 [2024-12-06 11:58:20.133477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.536 [2024-12-06 11:58:20.133508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.536 [2024-12-06 11:58:20.133557] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:22.536 [2024-12-06 11:58:20.133596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.536 [2024-12-06 11:58:20.133665] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:22.536 [2024-12-06 11:58:20.133679] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:22.536 [2024-12-06 11:58:20.133741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:22.536 [2024-12-06 11:58:20.133815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:22.536 [2024-12-06 11:58:20.133829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:22.536 [2024-12-06 11:58:20.133886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.536 pt2 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.536 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.537 "name": "raid_bdev1", 00:15:22.537 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:22.537 "strip_size_kb": 0, 00:15:22.537 "state": "online", 00:15:22.537 "raid_level": "raid1", 00:15:22.537 "superblock": true, 00:15:22.537 "num_base_bdevs": 2, 00:15:22.537 
"num_base_bdevs_discovered": 1, 00:15:22.537 "num_base_bdevs_operational": 1, 00:15:22.537 "base_bdevs_list": [ 00:15:22.537 { 00:15:22.537 "name": null, 00:15:22.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.537 "is_configured": false, 00:15:22.537 "data_offset": 256, 00:15:22.537 "data_size": 7936 00:15:22.537 }, 00:15:22.537 { 00:15:22.537 "name": "pt2", 00:15:22.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.537 "is_configured": true, 00:15:22.537 "data_offset": 256, 00:15:22.537 "data_size": 7936 00:15:22.537 } 00:15:22.537 ] 00:15:22.537 }' 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.537 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.107 [2024-12-06 11:58:20.563086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.107 [2024-12-06 11:58:20.563110] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.107 [2024-12-06 11:58:20.563165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.107 [2024-12-06 11:58:20.563198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.107 [2024-12-06 11:58:20.563208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.107 11:58:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.107 [2024-12-06 11:58:20.607027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:23.107 [2024-12-06 11:58:20.607072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.107 [2024-12-06 11:58:20.607087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:23.107 [2024-12-06 11:58:20.607099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.107 [2024-12-06 11:58:20.608971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.107 [2024-12-06 11:58:20.609005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:15:23.107 [2024-12-06 11:58:20.609045] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:23.107 [2024-12-06 11:58:20.609072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.107 [2024-12-06 11:58:20.609169] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:23.107 [2024-12-06 11:58:20.609189] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.107 [2024-12-06 11:58:20.609202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:23.107 [2024-12-06 11:58:20.609242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.107 [2024-12-06 11:58:20.609291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:23.107 [2024-12-06 11:58:20.609302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:23.107 [2024-12-06 11:58:20.609354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:23.107 [2024-12-06 11:58:20.609433] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:23.107 [2024-12-06 11:58:20.609443] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:23.107 [2024-12-06 11:58:20.609520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.107 pt1 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:23.107 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.108 "name": "raid_bdev1", 00:15:23.108 "uuid": "b5cee876-7aac-4755-9d68-97425a8de830", 00:15:23.108 "strip_size_kb": 0, 00:15:23.108 "state": "online", 00:15:23.108 "raid_level": "raid1", 
00:15:23.108 "superblock": true, 00:15:23.108 "num_base_bdevs": 2, 00:15:23.108 "num_base_bdevs_discovered": 1, 00:15:23.108 "num_base_bdevs_operational": 1, 00:15:23.108 "base_bdevs_list": [ 00:15:23.108 { 00:15:23.108 "name": null, 00:15:23.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.108 "is_configured": false, 00:15:23.108 "data_offset": 256, 00:15:23.108 "data_size": 7936 00:15:23.108 }, 00:15:23.108 { 00:15:23.108 "name": "pt2", 00:15:23.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.108 "is_configured": true, 00:15:23.108 "data_offset": 256, 00:15:23.108 "data_size": 7936 00:15:23.108 } 00:15:23.108 ] 00:15:23.108 }' 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.108 11:58:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 11:58:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:23.368 [2024-12-06 11:58:21.058501] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b5cee876-7aac-4755-9d68-97425a8de830 '!=' b5cee876-7aac-4755-9d68-97425a8de830 ']' 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97366 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97366 ']' 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97366 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.368 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97366 00:15:23.628 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.629 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.629 killing process with pid 97366 00:15:23.629 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97366' 00:15:23.629 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97366 00:15:23.629 [2024-12-06 11:58:21.145601] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.629 [2024-12-06 11:58:21.145672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:23.629 [2024-12-06 11:58:21.145716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.629 [2024-12-06 11:58:21.145724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:23.629 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97366 00:15:23.629 [2024-12-06 11:58:21.170159] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.889 11:58:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:23.889 00:15:23.889 real 0m4.799s 00:15:23.889 user 0m7.765s 00:15:23.889 sys 0m1.021s 00:15:23.889 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.889 11:58:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.889 ************************************ 00:15:23.889 END TEST raid_superblock_test_md_separate 00:15:23.889 ************************************ 00:15:23.889 11:58:21 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:23.889 11:58:21 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:23.889 11:58:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:23.889 11:58:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.889 11:58:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.889 ************************************ 00:15:23.889 START TEST raid_rebuild_test_sb_md_separate 00:15:23.889 ************************************ 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:23.889 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97678 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97678 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97678 ']' 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.890 11:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.890 Zero copy mechanism will not be used. 00:15:23.890 [2024-12-06 11:58:21.597586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:23.890 [2024-12-06 11:58:21.597709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97678 ] 00:15:24.150 [2024-12-06 11:58:21.744679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.150 [2024-12-06 11:58:21.790310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.150 [2024-12-06 11:58:21.833480] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.150 [2024-12-06 11:58:21.833515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.721 BaseBdev1_malloc 
00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.721 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.721 [2024-12-06 11:58:22.409104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:24.722 [2024-12-06 11:58:22.409164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.722 [2024-12-06 11:58:22.409190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:24.722 [2024-12-06 11:58:22.409204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.722 [2024-12-06 11:58:22.411115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.722 [2024-12-06 11:58:22.411149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:24.722 BaseBdev1 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.722 BaseBdev2_malloc 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.722 [2024-12-06 11:58:22.450070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:24.722 [2024-12-06 11:58:22.450159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.722 [2024-12-06 11:58:22.450206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:24.722 [2024-12-06 11:58:22.450228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.722 [2024-12-06 11:58:22.454567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.722 [2024-12-06 11:58:22.454628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:24.722 BaseBdev2 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.722 spare_malloc 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.722 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.983 spare_delay 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.983 [2024-12-06 11:58:22.493479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.983 [2024-12-06 11:58:22.493520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.983 [2024-12-06 11:58:22.493539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:24.983 [2024-12-06 11:58:22.493550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.983 [2024-12-06 11:58:22.495470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.983 [2024-12-06 11:58:22.495500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.983 spare 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.983 [2024-12-06 11:58:22.505521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.983 [2024-12-06 11:58:22.507343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.983 [2024-12-06 11:58:22.507519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:24.983 [2024-12-06 11:58:22.507532] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.983 [2024-12-06 11:58:22.507607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:24.983 [2024-12-06 11:58:22.507718] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:24.983 [2024-12-06 11:58:22.507740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:24.983 [2024-12-06 11:58:22.507831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.983 11:58:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.983 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.983 "name": "raid_bdev1", 00:15:24.983 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:24.983 "strip_size_kb": 0, 00:15:24.983 "state": "online", 00:15:24.983 "raid_level": "raid1", 00:15:24.983 "superblock": true, 00:15:24.983 "num_base_bdevs": 2, 00:15:24.983 "num_base_bdevs_discovered": 2, 00:15:24.983 "num_base_bdevs_operational": 2, 00:15:24.983 "base_bdevs_list": [ 00:15:24.983 { 00:15:24.983 "name": "BaseBdev1", 00:15:24.983 "uuid": "f7090b82-599f-5b95-aa8f-9f5eb70f13a1", 00:15:24.983 "is_configured": true, 00:15:24.983 "data_offset": 256, 00:15:24.983 "data_size": 7936 00:15:24.983 }, 00:15:24.983 { 00:15:24.983 "name": "BaseBdev2", 00:15:24.984 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:24.984 "is_configured": true, 00:15:24.984 "data_offset": 256, 00:15:24.984 "data_size": 7936 
00:15:24.984 } 00:15:24.984 ] 00:15:24.984 }' 00:15:24.984 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.984 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.243 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:25.244 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.244 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.244 11:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.244 [2024-12-06 11:58:22.980896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.504 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:25.504 [2024-12-06 11:58:23.232352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:25.504 /dev/nbd0 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.764 1+0 records in 00:15:25.764 1+0 records out 00:15:25.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386634 s, 10.6 MB/s 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.764 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:25.765 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.765 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.765 11:58:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:25.765 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:25.765 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:26.335 7936+0 records in 00:15:26.335 7936+0 records out 00:15:26.335 32505856 bytes (33 MB, 31 MiB) copied, 0.538177 s, 60.4 MB/s 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.335 11:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:26.335 [2024-12-06 11:58:24.064467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.335 11:58:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.335 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.335 [2024-12-06 11:58:24.088494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.594 "name": "raid_bdev1", 00:15:26.594 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:26.594 "strip_size_kb": 0, 00:15:26.594 "state": "online", 00:15:26.594 "raid_level": "raid1", 00:15:26.594 "superblock": true, 00:15:26.594 "num_base_bdevs": 2, 00:15:26.594 "num_base_bdevs_discovered": 1, 00:15:26.594 "num_base_bdevs_operational": 1, 00:15:26.594 "base_bdevs_list": [ 00:15:26.594 { 00:15:26.594 "name": null, 00:15:26.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.594 "is_configured": false, 00:15:26.594 "data_offset": 0, 00:15:26.594 "data_size": 7936 00:15:26.594 }, 00:15:26.594 { 00:15:26.594 "name": "BaseBdev2", 00:15:26.594 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:26.594 "is_configured": true, 00:15:26.594 "data_offset": 256, 00:15:26.594 "data_size": 7936 00:15:26.594 } 00:15:26.594 ] 00:15:26.594 }' 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.594 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.854 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.854 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.854 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.854 [2024-12-06 11:58:24.511779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.854 [2024-12-06 11:58:24.513533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:26.854 [2024-12-06 11:58:24.515310] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.854 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.854 11:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:27.795 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.054 "name": "raid_bdev1", 00:15:28.054 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:28.054 "strip_size_kb": 0, 00:15:28.054 "state": "online", 00:15:28.054 "raid_level": "raid1", 00:15:28.054 "superblock": true, 00:15:28.054 "num_base_bdevs": 2, 00:15:28.054 "num_base_bdevs_discovered": 2, 00:15:28.054 "num_base_bdevs_operational": 2, 00:15:28.054 "process": { 00:15:28.054 "type": "rebuild", 00:15:28.054 "target": "spare", 00:15:28.054 "progress": { 00:15:28.054 "blocks": 2560, 00:15:28.054 "percent": 32 00:15:28.054 } 00:15:28.054 }, 00:15:28.054 "base_bdevs_list": [ 00:15:28.054 { 00:15:28.054 "name": "spare", 00:15:28.054 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:28.054 "is_configured": true, 00:15:28.054 "data_offset": 256, 00:15:28.054 "data_size": 7936 00:15:28.054 }, 00:15:28.054 { 00:15:28.054 "name": "BaseBdev2", 00:15:28.054 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:28.054 "is_configured": true, 00:15:28.054 "data_offset": 256, 00:15:28.054 "data_size": 7936 00:15:28.054 } 00:15:28.054 ] 00:15:28.054 }' 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.054 11:58:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.054 [2024-12-06 11:58:25.682940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.054 [2024-12-06 11:58:25.719857] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:28.054 [2024-12-06 11:58:25.719916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.054 [2024-12-06 11:58:25.719935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.054 [2024-12-06 11:58:25.719942] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.054 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.054 11:58:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.055 "name": "raid_bdev1", 00:15:28.055 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:28.055 "strip_size_kb": 0, 00:15:28.055 "state": "online", 00:15:28.055 "raid_level": "raid1", 00:15:28.055 "superblock": true, 00:15:28.055 "num_base_bdevs": 2, 00:15:28.055 "num_base_bdevs_discovered": 1, 00:15:28.055 "num_base_bdevs_operational": 1, 00:15:28.055 "base_bdevs_list": [ 00:15:28.055 { 00:15:28.055 "name": null, 00:15:28.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.055 "is_configured": false, 00:15:28.055 "data_offset": 0, 00:15:28.055 "data_size": 7936 00:15:28.055 }, 00:15:28.055 { 00:15:28.055 "name": "BaseBdev2", 00:15:28.055 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:28.055 "is_configured": true, 00:15:28.055 "data_offset": 256, 00:15:28.055 "data_size": 7936 00:15:28.055 } 00:15:28.055 ] 00:15:28.055 }' 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.055 11:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.624 "name": "raid_bdev1", 00:15:28.624 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:28.624 "strip_size_kb": 0, 00:15:28.624 "state": "online", 00:15:28.624 "raid_level": "raid1", 00:15:28.624 "superblock": true, 00:15:28.624 "num_base_bdevs": 2, 00:15:28.624 "num_base_bdevs_discovered": 1, 00:15:28.624 "num_base_bdevs_operational": 1, 00:15:28.624 "base_bdevs_list": [ 00:15:28.624 { 00:15:28.624 "name": null, 00:15:28.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.624 
"is_configured": false, 00:15:28.624 "data_offset": 0, 00:15:28.624 "data_size": 7936 00:15:28.624 }, 00:15:28.624 { 00:15:28.624 "name": "BaseBdev2", 00:15:28.624 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:28.624 "is_configured": true, 00:15:28.624 "data_offset": 256, 00:15:28.624 "data_size": 7936 00:15:28.624 } 00:15:28.624 ] 00:15:28.624 }' 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.624 [2024-12-06 11:58:26.309868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.624 [2024-12-06 11:58:26.311397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:28.624 [2024-12-06 11:58:26.313192] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.624 11:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.005 11:58:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.005 "name": "raid_bdev1", 00:15:30.005 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:30.005 "strip_size_kb": 0, 00:15:30.005 "state": "online", 00:15:30.005 "raid_level": "raid1", 00:15:30.005 "superblock": true, 00:15:30.005 "num_base_bdevs": 2, 00:15:30.005 "num_base_bdevs_discovered": 2, 00:15:30.005 "num_base_bdevs_operational": 2, 00:15:30.005 "process": { 00:15:30.005 "type": "rebuild", 00:15:30.005 "target": "spare", 00:15:30.005 "progress": { 00:15:30.005 "blocks": 2560, 00:15:30.005 "percent": 32 00:15:30.005 } 00:15:30.005 }, 00:15:30.005 "base_bdevs_list": [ 00:15:30.005 { 00:15:30.005 "name": "spare", 00:15:30.005 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:30.005 "is_configured": true, 00:15:30.005 "data_offset": 256, 00:15:30.005 "data_size": 7936 00:15:30.005 }, 
00:15:30.005 { 00:15:30.005 "name": "BaseBdev2", 00:15:30.005 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:30.005 "is_configured": true, 00:15:30.005 "data_offset": 256, 00:15:30.005 "data_size": 7936 00:15:30.005 } 00:15:30.005 ] 00:15:30.005 }' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:30.005 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=581 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.005 11:58:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.005 "name": "raid_bdev1", 00:15:30.005 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:30.005 "strip_size_kb": 0, 00:15:30.005 "state": "online", 00:15:30.005 "raid_level": "raid1", 00:15:30.005 "superblock": true, 00:15:30.005 "num_base_bdevs": 2, 00:15:30.005 "num_base_bdevs_discovered": 2, 00:15:30.005 "num_base_bdevs_operational": 2, 00:15:30.005 "process": { 00:15:30.005 "type": "rebuild", 00:15:30.005 "target": "spare", 00:15:30.005 "progress": { 00:15:30.005 "blocks": 2816, 00:15:30.005 "percent": 35 00:15:30.005 } 00:15:30.005 }, 00:15:30.005 "base_bdevs_list": [ 00:15:30.005 { 00:15:30.005 "name": "spare", 00:15:30.005 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:30.005 "is_configured": true, 00:15:30.005 "data_offset": 256, 00:15:30.005 "data_size": 7936 00:15:30.005 }, 00:15:30.005 { 00:15:30.005 "name": "BaseBdev2", 00:15:30.005 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:30.005 
"is_configured": true, 00:15:30.005 "data_offset": 256, 00:15:30.005 "data_size": 7936 00:15:30.005 } 00:15:30.005 ] 00:15:30.005 }' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.005 11:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.945 11:58:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.945 "name": "raid_bdev1", 00:15:30.945 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:30.945 "strip_size_kb": 0, 00:15:30.945 "state": "online", 00:15:30.945 "raid_level": "raid1", 00:15:30.945 "superblock": true, 00:15:30.945 "num_base_bdevs": 2, 00:15:30.945 "num_base_bdevs_discovered": 2, 00:15:30.945 "num_base_bdevs_operational": 2, 00:15:30.945 "process": { 00:15:30.945 "type": "rebuild", 00:15:30.945 "target": "spare", 00:15:30.945 "progress": { 00:15:30.945 "blocks": 5888, 00:15:30.945 "percent": 74 00:15:30.945 } 00:15:30.945 }, 00:15:30.945 "base_bdevs_list": [ 00:15:30.945 { 00:15:30.945 "name": "spare", 00:15:30.945 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:30.945 "is_configured": true, 00:15:30.945 "data_offset": 256, 00:15:30.945 "data_size": 7936 00:15:30.945 }, 00:15:30.945 { 00:15:30.945 "name": "BaseBdev2", 00:15:30.945 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:30.945 "is_configured": true, 00:15:30.945 "data_offset": 256, 00:15:30.945 "data_size": 7936 00:15:30.945 } 00:15:30.945 ] 00:15:30.945 }' 00:15:30.945 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.206 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.206 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.206 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.206 11:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.775 [2024-12-06 11:58:29.423709] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:31.775 [2024-12-06 11:58:29.423779] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:31.775 [2024-12-06 11:58:29.424373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.296 "name": "raid_bdev1", 00:15:32.296 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:32.296 "strip_size_kb": 0, 00:15:32.296 "state": "online", 00:15:32.296 "raid_level": "raid1", 00:15:32.296 "superblock": true, 00:15:32.296 
"num_base_bdevs": 2, 00:15:32.296 "num_base_bdevs_discovered": 2, 00:15:32.296 "num_base_bdevs_operational": 2, 00:15:32.296 "base_bdevs_list": [ 00:15:32.296 { 00:15:32.296 "name": "spare", 00:15:32.296 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:32.296 "is_configured": true, 00:15:32.296 "data_offset": 256, 00:15:32.296 "data_size": 7936 00:15:32.296 }, 00:15:32.296 { 00:15:32.296 "name": "BaseBdev2", 00:15:32.296 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:32.296 "is_configured": true, 00:15:32.296 "data_offset": 256, 00:15:32.296 "data_size": 7936 00:15:32.296 } 00:15:32.296 ] 00:15:32.296 }' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.296 11:58:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.296 "name": "raid_bdev1", 00:15:32.296 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:32.296 "strip_size_kb": 0, 00:15:32.296 "state": "online", 00:15:32.296 "raid_level": "raid1", 00:15:32.296 "superblock": true, 00:15:32.296 "num_base_bdevs": 2, 00:15:32.296 "num_base_bdevs_discovered": 2, 00:15:32.296 "num_base_bdevs_operational": 2, 00:15:32.296 "base_bdevs_list": [ 00:15:32.296 { 00:15:32.296 "name": "spare", 00:15:32.296 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:32.296 "is_configured": true, 00:15:32.296 "data_offset": 256, 00:15:32.296 "data_size": 7936 00:15:32.296 }, 00:15:32.296 { 00:15:32.296 "name": "BaseBdev2", 00:15:32.296 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:32.296 "is_configured": true, 00:15:32.296 "data_offset": 256, 00:15:32.296 "data_size": 7936 00:15:32.296 } 00:15:32.296 ] 00:15:32.296 }' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.296 11:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.296 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.296 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.297 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.556 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.557 "name": "raid_bdev1", 00:15:32.557 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:32.557 
"strip_size_kb": 0, 00:15:32.557 "state": "online", 00:15:32.557 "raid_level": "raid1", 00:15:32.557 "superblock": true, 00:15:32.557 "num_base_bdevs": 2, 00:15:32.557 "num_base_bdevs_discovered": 2, 00:15:32.557 "num_base_bdevs_operational": 2, 00:15:32.557 "base_bdevs_list": [ 00:15:32.557 { 00:15:32.557 "name": "spare", 00:15:32.557 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:32.557 "is_configured": true, 00:15:32.557 "data_offset": 256, 00:15:32.557 "data_size": 7936 00:15:32.557 }, 00:15:32.557 { 00:15:32.557 "name": "BaseBdev2", 00:15:32.557 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:32.557 "is_configured": true, 00:15:32.557 "data_offset": 256, 00:15:32.557 "data_size": 7936 00:15:32.557 } 00:15:32.557 ] 00:15:32.557 }' 00:15:32.557 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.557 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 [2024-12-06 11:58:30.449069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.817 [2024-12-06 11:58:30.449097] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.817 [2024-12-06 11:58:30.449177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.817 [2024-12-06 11:58:30.449280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.817 [2024-12-06 11:58:30.449302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.817 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:33.078 /dev/nbd0 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.078 1+0 records in 00:15:33.078 1+0 records out 00:15:33.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0016044 s, 2.6 MB/s 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.078 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:33.338 /dev/nbd1 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.338 11:58:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.338 1+0 records in 00:15:33.338 1+0 records out 00:15:33.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453777 s, 9.0 MB/s 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:33.338 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.339 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.598 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.598 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.598 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.598 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.599 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 [2024-12-06 11:58:31.512341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.859 [2024-12-06 11:58:31.512619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.859 [2024-12-06 11:58:31.512701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:33.859 [2024-12-06 11:58:31.512717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:33.859 [2024-12-06 11:58:31.514613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.859 [2024-12-06 11:58:31.514644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.859 [2024-12-06 11:58:31.514690] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.859 [2024-12-06 11:58:31.514733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.859 [2024-12-06 11:58:31.514831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.859 spare 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.859 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 [2024-12-06 11:58:31.614731] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:33.859 [2024-12-06 11:58:31.614762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:33.859 [2024-12-06 11:58:31.614852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:33.859 [2024-12-06 11:58:31.614942] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:33.859 [2024-12-06 11:58:31.614952] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:33.859 [2024-12-06 11:58:31.615031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.119 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.120 "name": "raid_bdev1", 00:15:34.120 "uuid": 
"843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:34.120 "strip_size_kb": 0, 00:15:34.120 "state": "online", 00:15:34.120 "raid_level": "raid1", 00:15:34.120 "superblock": true, 00:15:34.120 "num_base_bdevs": 2, 00:15:34.120 "num_base_bdevs_discovered": 2, 00:15:34.120 "num_base_bdevs_operational": 2, 00:15:34.120 "base_bdevs_list": [ 00:15:34.120 { 00:15:34.120 "name": "spare", 00:15:34.120 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:34.120 "is_configured": true, 00:15:34.120 "data_offset": 256, 00:15:34.120 "data_size": 7936 00:15:34.120 }, 00:15:34.120 { 00:15:34.120 "name": "BaseBdev2", 00:15:34.120 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:34.120 "is_configured": true, 00:15:34.120 "data_offset": 256, 00:15:34.120 "data_size": 7936 00:15:34.120 } 00:15:34.120 ] 00:15:34.120 }' 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.120 11:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.380 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.640 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.640 "name": "raid_bdev1", 00:15:34.640 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:34.640 "strip_size_kb": 0, 00:15:34.640 "state": "online", 00:15:34.640 "raid_level": "raid1", 00:15:34.640 "superblock": true, 00:15:34.640 "num_base_bdevs": 2, 00:15:34.640 "num_base_bdevs_discovered": 2, 00:15:34.641 "num_base_bdevs_operational": 2, 00:15:34.641 "base_bdevs_list": [ 00:15:34.641 { 00:15:34.641 "name": "spare", 00:15:34.641 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:34.641 "is_configured": true, 00:15:34.641 "data_offset": 256, 00:15:34.641 "data_size": 7936 00:15:34.641 }, 00:15:34.641 { 00:15:34.641 "name": "BaseBdev2", 00:15:34.641 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:34.641 "is_configured": true, 00:15:34.641 "data_offset": 256, 00:15:34.641 "data_size": 7936 00:15:34.641 } 00:15:34.641 ] 00:15:34.641 }' 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:34.641 
11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 [2024-12-06 11:58:32.271094] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.641 "name": "raid_bdev1", 00:15:34.641 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:34.641 "strip_size_kb": 0, 00:15:34.641 "state": "online", 00:15:34.641 "raid_level": "raid1", 00:15:34.641 "superblock": true, 00:15:34.641 "num_base_bdevs": 2, 00:15:34.641 "num_base_bdevs_discovered": 1, 00:15:34.641 "num_base_bdevs_operational": 1, 00:15:34.641 "base_bdevs_list": [ 00:15:34.641 { 00:15:34.641 "name": null, 00:15:34.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.641 "is_configured": false, 00:15:34.641 "data_offset": 0, 00:15:34.641 "data_size": 7936 00:15:34.641 }, 00:15:34.641 { 00:15:34.641 "name": "BaseBdev2", 00:15:34.641 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:34.641 "is_configured": true, 00:15:34.641 "data_offset": 256, 00:15:34.641 "data_size": 7936 00:15:34.641 } 00:15:34.641 ] 00:15:34.641 }' 00:15:34.641 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.641 11:58:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.211 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.211 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.211 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.211 [2024-12-06 11:58:32.754268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.211 [2024-12-06 11:58:32.754404] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:35.211 [2024-12-06 11:58:32.754417] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:35.211 [2024-12-06 11:58:32.754449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.211 [2024-12-06 11:58:32.756111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:35.211 [2024-12-06 11:58:32.757954] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.211 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.211 11:58:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.155 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.155 "name": "raid_bdev1", 00:15:36.155 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:36.155 "strip_size_kb": 0, 00:15:36.155 "state": "online", 00:15:36.155 "raid_level": "raid1", 00:15:36.155 "superblock": true, 00:15:36.155 "num_base_bdevs": 2, 00:15:36.155 "num_base_bdevs_discovered": 2, 00:15:36.155 "num_base_bdevs_operational": 2, 00:15:36.155 "process": { 00:15:36.155 "type": "rebuild", 00:15:36.155 "target": "spare", 00:15:36.155 "progress": { 00:15:36.155 "blocks": 2560, 00:15:36.155 "percent": 32 00:15:36.155 } 00:15:36.156 }, 00:15:36.156 "base_bdevs_list": [ 00:15:36.156 { 00:15:36.156 "name": "spare", 00:15:36.156 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:36.156 "is_configured": true, 00:15:36.156 "data_offset": 256, 00:15:36.156 "data_size": 7936 00:15:36.156 }, 00:15:36.156 { 00:15:36.156 "name": "BaseBdev2", 00:15:36.156 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:36.156 "is_configured": true, 00:15:36.156 "data_offset": 256, 00:15:36.156 "data_size": 7936 00:15:36.156 } 00:15:36.156 ] 00:15:36.156 }' 00:15:36.156 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.156 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.156 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.416 [2024-12-06 11:58:33.921179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.416 [2024-12-06 11:58:33.961949] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.416 [2024-12-06 11:58:33.961999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.416 [2024-12-06 11:58:33.962015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.416 [2024-12-06 11:58:33.962022] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.416 11:58:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.416 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.416 "name": "raid_bdev1", 00:15:36.416 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:36.416 "strip_size_kb": 0, 00:15:36.416 "state": "online", 00:15:36.416 "raid_level": "raid1", 00:15:36.416 "superblock": true, 00:15:36.416 "num_base_bdevs": 2, 00:15:36.416 "num_base_bdevs_discovered": 1, 00:15:36.416 "num_base_bdevs_operational": 1, 00:15:36.416 "base_bdevs_list": [ 00:15:36.416 { 00:15:36.416 "name": null, 00:15:36.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.416 
"is_configured": false, 00:15:36.416 "data_offset": 0, 00:15:36.416 "data_size": 7936 00:15:36.416 }, 00:15:36.416 { 00:15:36.416 "name": "BaseBdev2", 00:15:36.416 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:36.416 "is_configured": true, 00:15:36.416 "data_offset": 256, 00:15:36.416 "data_size": 7936 00:15:36.416 } 00:15:36.416 ] 00:15:36.416 }' 00:15:36.416 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.416 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.985 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.985 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.985 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.985 [2024-12-06 11:58:34.436928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.985 [2024-12-06 11:58:34.436980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.985 [2024-12-06 11:58:34.437006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:36.985 [2024-12-06 11:58:34.437015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.985 [2024-12-06 11:58:34.437205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.985 [2024-12-06 11:58:34.437224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.985 [2024-12-06 11:58:34.437306] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.985 [2024-12-06 11:58:34.437317] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:15:36.985 [2024-12-06 11:58:34.437340] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:36.985 [2024-12-06 11:58:34.437361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.985 [2024-12-06 11:58:34.438887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:36.985 [2024-12-06 11:58:34.440729] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.985 spare 00:15:36.985 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.985 11:58:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:37.921 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.921 "name": "raid_bdev1", 00:15:37.921 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:37.921 "strip_size_kb": 0, 00:15:37.921 "state": "online", 00:15:37.921 "raid_level": "raid1", 00:15:37.921 "superblock": true, 00:15:37.921 "num_base_bdevs": 2, 00:15:37.921 "num_base_bdevs_discovered": 2, 00:15:37.921 "num_base_bdevs_operational": 2, 00:15:37.921 "process": { 00:15:37.921 "type": "rebuild", 00:15:37.921 "target": "spare", 00:15:37.921 "progress": { 00:15:37.921 "blocks": 2560, 00:15:37.921 "percent": 32 00:15:37.921 } 00:15:37.921 }, 00:15:37.921 "base_bdevs_list": [ 00:15:37.921 { 00:15:37.921 "name": "spare", 00:15:37.921 "uuid": "c88c2e0d-a350-51ef-8289-41e46f748c10", 00:15:37.921 "is_configured": true, 00:15:37.921 "data_offset": 256, 00:15:37.921 "data_size": 7936 00:15:37.922 }, 00:15:37.922 { 00:15:37.922 "name": "BaseBdev2", 00:15:37.922 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:37.922 "is_configured": true, 00:15:37.922 "data_offset": 256, 00:15:37.922 "data_size": 7936 00:15:37.922 } 00:15:37.922 ] 00:15:37.922 }' 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.922 11:58:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.922 [2024-12-06 11:58:35.599909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.922 [2024-12-06 11:58:35.644711] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.922 [2024-12-06 11:58:35.644769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.922 [2024-12-06 11:58:35.644782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.922 [2024-12-06 11:58:35.644792] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.922 11:58:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.922 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.180 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.180 "name": "raid_bdev1", 00:15:38.180 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:38.180 "strip_size_kb": 0, 00:15:38.180 "state": "online", 00:15:38.180 "raid_level": "raid1", 00:15:38.180 "superblock": true, 00:15:38.180 "num_base_bdevs": 2, 00:15:38.180 "num_base_bdevs_discovered": 1, 00:15:38.180 "num_base_bdevs_operational": 1, 00:15:38.180 "base_bdevs_list": [ 00:15:38.180 { 00:15:38.180 "name": null, 00:15:38.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.180 "is_configured": false, 00:15:38.180 "data_offset": 0, 00:15:38.180 "data_size": 7936 00:15:38.180 }, 00:15:38.180 { 00:15:38.180 "name": "BaseBdev2", 00:15:38.180 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:38.180 "is_configured": true, 00:15:38.180 "data_offset": 256, 00:15:38.180 "data_size": 7936 00:15:38.180 } 00:15:38.180 ] 00:15:38.180 }' 00:15:38.180 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.180 11:58:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.438 "name": "raid_bdev1", 00:15:38.438 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:38.438 "strip_size_kb": 0, 00:15:38.438 "state": "online", 00:15:38.438 "raid_level": "raid1", 00:15:38.438 "superblock": true, 00:15:38.438 "num_base_bdevs": 2, 00:15:38.438 "num_base_bdevs_discovered": 1, 00:15:38.438 "num_base_bdevs_operational": 1, 00:15:38.438 "base_bdevs_list": [ 00:15:38.438 { 00:15:38.438 "name": null, 00:15:38.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.438 "is_configured": false, 00:15:38.438 "data_offset": 0, 00:15:38.438 "data_size": 7936 00:15:38.438 }, 00:15:38.438 { 00:15:38.438 "name": "BaseBdev2", 00:15:38.438 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:38.438 "is_configured": true, 
00:15:38.438 "data_offset": 256, 00:15:38.438 "data_size": 7936 00:15:38.438 } 00:15:38.438 ] 00:15:38.438 }' 00:15:38.438 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.697 [2024-12-06 11:58:36.290766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.697 [2024-12-06 11:58:36.290812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.697 [2024-12-06 11:58:36.290829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:38.697 [2024-12-06 11:58:36.290841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.697 [2024-12-06 11:58:36.291006] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.697 [2024-12-06 11:58:36.291023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.697 [2024-12-06 11:58:36.291063] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:38.697 [2024-12-06 11:58:36.291079] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:38.697 [2024-12-06 11:58:36.291096] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:38.697 [2024-12-06 11:58:36.291116] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:38.697 BaseBdev1 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.697 11:58:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.633 "name": "raid_bdev1", 00:15:39.633 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:39.633 "strip_size_kb": 0, 00:15:39.633 "state": "online", 00:15:39.633 "raid_level": "raid1", 00:15:39.633 "superblock": true, 00:15:39.633 "num_base_bdevs": 2, 00:15:39.633 "num_base_bdevs_discovered": 1, 00:15:39.633 "num_base_bdevs_operational": 1, 00:15:39.633 "base_bdevs_list": [ 00:15:39.633 { 00:15:39.633 "name": null, 00:15:39.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.633 "is_configured": false, 00:15:39.633 "data_offset": 0, 00:15:39.633 "data_size": 7936 00:15:39.633 }, 00:15:39.633 { 00:15:39.633 "name": "BaseBdev2", 00:15:39.633 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:39.633 "is_configured": true, 00:15:39.633 "data_offset": 256, 00:15:39.633 "data_size": 7936 00:15:39.633 } 00:15:39.633 ] 00:15:39.633 }' 00:15:39.633 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.633 11:58:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.200 "name": "raid_bdev1", 00:15:40.200 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:40.200 "strip_size_kb": 0, 00:15:40.200 "state": "online", 00:15:40.200 "raid_level": "raid1", 00:15:40.200 "superblock": true, 00:15:40.200 "num_base_bdevs": 2, 00:15:40.200 "num_base_bdevs_discovered": 1, 00:15:40.200 "num_base_bdevs_operational": 1, 00:15:40.200 "base_bdevs_list": [ 00:15:40.200 { 00:15:40.200 "name": null, 00:15:40.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.200 "is_configured": false, 00:15:40.200 "data_offset": 0, 00:15:40.200 
"data_size": 7936 00:15:40.200 }, 00:15:40.200 { 00:15:40.200 "name": "BaseBdev2", 00:15:40.200 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:40.200 "is_configured": true, 00:15:40.200 "data_offset": 256, 00:15:40.200 "data_size": 7936 00:15:40.200 } 00:15:40.200 ] 00:15:40.200 }' 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.200 [2024-12-06 11:58:37.836077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.200 [2024-12-06 11:58:37.836205] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:40.200 [2024-12-06 11:58:37.836216] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:40.200 request: 00:15:40.200 { 00:15:40.200 "base_bdev": "BaseBdev1", 00:15:40.200 "raid_bdev": "raid_bdev1", 00:15:40.200 "method": "bdev_raid_add_base_bdev", 00:15:40.200 "req_id": 1 00:15:40.200 } 00:15:40.200 Got JSON-RPC error response 00:15:40.200 response: 00:15:40.200 { 00:15:40.200 "code": -22, 00:15:40.200 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:40.200 } 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.200 11:58:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.140 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.400 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.400 "name": "raid_bdev1", 00:15:41.400 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:41.400 "strip_size_kb": 0, 00:15:41.400 "state": "online", 00:15:41.400 "raid_level": "raid1", 00:15:41.400 "superblock": true, 00:15:41.400 "num_base_bdevs": 2, 00:15:41.400 "num_base_bdevs_discovered": 1, 00:15:41.400 "num_base_bdevs_operational": 1, 00:15:41.400 "base_bdevs_list": [ 
00:15:41.400 { 00:15:41.400 "name": null, 00:15:41.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.400 "is_configured": false, 00:15:41.400 "data_offset": 0, 00:15:41.400 "data_size": 7936 00:15:41.400 }, 00:15:41.400 { 00:15:41.400 "name": "BaseBdev2", 00:15:41.400 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:41.400 "is_configured": true, 00:15:41.400 "data_offset": 256, 00:15:41.400 "data_size": 7936 00:15:41.400 } 00:15:41.400 ] 00:15:41.400 }' 00:15:41.400 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.400 11:58:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.660 "name": "raid_bdev1", 00:15:41.660 "uuid": "843cf22b-cf36-4d46-9693-0fbde8629cf9", 00:15:41.660 "strip_size_kb": 0, 00:15:41.660 "state": "online", 00:15:41.660 "raid_level": "raid1", 00:15:41.660 "superblock": true, 00:15:41.660 "num_base_bdevs": 2, 00:15:41.660 "num_base_bdevs_discovered": 1, 00:15:41.660 "num_base_bdevs_operational": 1, 00:15:41.660 "base_bdevs_list": [ 00:15:41.660 { 00:15:41.660 "name": null, 00:15:41.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.660 "is_configured": false, 00:15:41.660 "data_offset": 0, 00:15:41.660 "data_size": 7936 00:15:41.660 }, 00:15:41.660 { 00:15:41.660 "name": "BaseBdev2", 00:15:41.660 "uuid": "188357c6-efd9-578a-abdb-a0b31e7cdadf", 00:15:41.660 "is_configured": true, 00:15:41.660 "data_offset": 256, 00:15:41.660 "data_size": 7936 00:15:41.660 } 00:15:41.660 ] 00:15:41.660 }' 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.660 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97678 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97678 ']' 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97678 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.920 
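
The checks above repeatedly apply the same pattern: `rpc_cmd bdev_raid_get_bdevs all` returns a JSON array, `jq -r '.[] | select(.name == "...")'` picks out one raid bdev, and further `jq` filters with the `//` alternative operator fall back to `"none"` when a field (such as `.process.type`) is absent. A minimal, self-contained sketch of that filter pattern is below; the inline JSON is a trimmed, hypothetical stand-in for real RPC output, and `jq` must be installed (as it is on these test nodes).

```shell
#!/bin/sh
# Hypothetical, trimmed stand-in for the output of: rpc_cmd bdev_raid_get_bdevs all
raid_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":1}]'

# Select the entry for one raid bdev, as bdev_raid.sh@113 / @174 do.
info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Pull individual fields out of the selected object.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

# The "// none" alternative yields a default when the key is missing,
# which is how bdev_raid.sh@176/@177 treat a raid bdev with no
# background process (no rebuild running).
ptype=$(echo "$info" | jq -r '.process.type // "none"')

echo "state=$state discovered=$discovered process=$ptype"
```

The `[[ none == \n\o\n\e ]]` comparisons in the log are the xtrace rendering of the same idea: the extracted value is matched against the expected literal, and a mismatch fails the test.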
11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97678 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97678' 00:15:41.920 killing process with pid 97678 00:15:41.920 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97678 00:15:41.920 Received shutdown signal, test time was about 60.000000 seconds 00:15:41.920 00:15:41.921 Latency(us) 00:15:41.921 [2024-12-06T11:58:39.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.921 [2024-12-06T11:58:39.679Z] =================================================================================================================== 00:15:41.921 [2024-12-06T11:58:39.679Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:41.921 [2024-12-06 11:58:39.467121] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.921 [2024-12-06 11:58:39.467245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.921 [2024-12-06 11:58:39.467297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.921 [2024-12-06 11:58:39.467306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:41.921 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97678 00:15:41.921 [2024-12-06 11:58:39.501529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.181 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:15:42.181 00:15:42.181 real 0m18.236s 00:15:42.181 user 0m24.282s 00:15:42.181 sys 0m2.590s 00:15:42.181 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.181 11:58:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.181 ************************************ 00:15:42.181 END TEST raid_rebuild_test_sb_md_separate 00:15:42.181 ************************************ 00:15:42.181 11:58:39 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:42.181 11:58:39 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:42.181 11:58:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:42.181 11:58:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.181 11:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.181 ************************************ 00:15:42.181 START TEST raid_state_function_test_sb_md_interleaved 00:15:42.181 ************************************ 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:42.181 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:42.182 11:58:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98358 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:42.182 Process raid pid: 98358 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98358' 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98358 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98358 ']' 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.182 11:58:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.182 [2024-12-06 11:58:39.901363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:42.182 [2024-12-06 11:58:39.901485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.442 [2024-12-06 11:58:40.048151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.442 [2024-12-06 11:58:40.094284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.442 [2024-12-06 11:58:40.136788] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.442 [2024-12-06 11:58:40.136833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.013 [2024-12-06 11:58:40.698150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.013 [2024-12-06 11:58:40.698202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.013 [2024-12-06 11:58:40.698214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.013 [2024-12-06 11:58:40.698223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.013 11:58:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.013 11:58:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.013 "name": "Existed_Raid", 00:15:43.013 "uuid": "f21836d3-f002-4ce6-b1d6-ae8f05eaa7ee", 00:15:43.013 "strip_size_kb": 0, 00:15:43.013 "state": "configuring", 00:15:43.013 "raid_level": "raid1", 00:15:43.013 "superblock": true, 00:15:43.013 "num_base_bdevs": 2, 00:15:43.013 "num_base_bdevs_discovered": 0, 00:15:43.013 "num_base_bdevs_operational": 2, 00:15:43.013 "base_bdevs_list": [ 00:15:43.013 { 00:15:43.013 "name": "BaseBdev1", 00:15:43.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.013 "is_configured": false, 00:15:43.013 "data_offset": 0, 00:15:43.013 "data_size": 0 00:15:43.013 }, 00:15:43.013 { 00:15:43.013 "name": "BaseBdev2", 00:15:43.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.013 "is_configured": false, 00:15:43.013 "data_offset": 0, 00:15:43.013 "data_size": 0 00:15:43.013 } 00:15:43.013 ] 00:15:43.013 }' 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.013 11:58:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.615 [2024-12-06 11:58:41.185257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.615 [2024-12-06 11:58:41.185294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.615 [2024-12-06 11:58:41.197244] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.615 [2024-12-06 11:58:41.197282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.615 [2024-12-06 11:58:41.197298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.615 [2024-12-06 11:58:41.197308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.615 [2024-12-06 11:58:41.218083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.615 BaseBdev1 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:43.615 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.616 [ 00:15:43.616 { 00:15:43.616 "name": "BaseBdev1", 00:15:43.616 "aliases": [ 00:15:43.616 "2a73c8ef-4027-4104-8bac-5b3edbcc688c" 00:15:43.616 ], 00:15:43.616 "product_name": "Malloc disk", 00:15:43.616 "block_size": 4128, 00:15:43.616 "num_blocks": 8192, 00:15:43.616 "uuid": "2a73c8ef-4027-4104-8bac-5b3edbcc688c", 00:15:43.616 "md_size": 32, 00:15:43.616 
"md_interleave": true, 00:15:43.616 "dif_type": 0, 00:15:43.616 "assigned_rate_limits": { 00:15:43.616 "rw_ios_per_sec": 0, 00:15:43.616 "rw_mbytes_per_sec": 0, 00:15:43.616 "r_mbytes_per_sec": 0, 00:15:43.616 "w_mbytes_per_sec": 0 00:15:43.616 }, 00:15:43.616 "claimed": true, 00:15:43.616 "claim_type": "exclusive_write", 00:15:43.616 "zoned": false, 00:15:43.616 "supported_io_types": { 00:15:43.616 "read": true, 00:15:43.616 "write": true, 00:15:43.616 "unmap": true, 00:15:43.616 "flush": true, 00:15:43.616 "reset": true, 00:15:43.616 "nvme_admin": false, 00:15:43.616 "nvme_io": false, 00:15:43.616 "nvme_io_md": false, 00:15:43.616 "write_zeroes": true, 00:15:43.616 "zcopy": true, 00:15:43.616 "get_zone_info": false, 00:15:43.616 "zone_management": false, 00:15:43.616 "zone_append": false, 00:15:43.616 "compare": false, 00:15:43.616 "compare_and_write": false, 00:15:43.616 "abort": true, 00:15:43.616 "seek_hole": false, 00:15:43.616 "seek_data": false, 00:15:43.616 "copy": true, 00:15:43.616 "nvme_iov_md": false 00:15:43.616 }, 00:15:43.616 "memory_domains": [ 00:15:43.616 { 00:15:43.616 "dma_device_id": "system", 00:15:43.616 "dma_device_type": 1 00:15:43.616 }, 00:15:43.616 { 00:15:43.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.616 "dma_device_type": 2 00:15:43.616 } 00:15:43.616 ], 00:15:43.616 "driver_specific": {} 00:15:43.616 } 00:15:43.616 ] 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.616 11:58:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.616 "name": "Existed_Raid", 00:15:43.616 "uuid": "a12b3de6-0b6d-413c-96a7-e6589411181a", 00:15:43.616 "strip_size_kb": 0, 00:15:43.616 "state": "configuring", 00:15:43.616 "raid_level": "raid1", 
00:15:43.616 "superblock": true, 00:15:43.616 "num_base_bdevs": 2, 00:15:43.616 "num_base_bdevs_discovered": 1, 00:15:43.616 "num_base_bdevs_operational": 2, 00:15:43.616 "base_bdevs_list": [ 00:15:43.616 { 00:15:43.616 "name": "BaseBdev1", 00:15:43.616 "uuid": "2a73c8ef-4027-4104-8bac-5b3edbcc688c", 00:15:43.616 "is_configured": true, 00:15:43.616 "data_offset": 256, 00:15:43.616 "data_size": 7936 00:15:43.616 }, 00:15:43.616 { 00:15:43.616 "name": "BaseBdev2", 00:15:43.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.616 "is_configured": false, 00:15:43.616 "data_offset": 0, 00:15:43.616 "data_size": 0 00:15:43.616 } 00:15:43.616 ] 00:15:43.616 }' 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.616 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-12-06 11:58:41.701295] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.215 [2024-12-06 11:58:41.701336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-12-06 11:58:41.713311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.215 [2024-12-06 11:58:41.715127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.215 [2024-12-06 11:58:41.715166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.215 
11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.215 "name": "Existed_Raid", 00:15:44.215 "uuid": "b163182e-a210-4b9b-9358-17f5b59583b3", 00:15:44.215 "strip_size_kb": 0, 00:15:44.215 "state": "configuring", 00:15:44.215 "raid_level": "raid1", 00:15:44.215 "superblock": true, 00:15:44.215 "num_base_bdevs": 2, 00:15:44.215 "num_base_bdevs_discovered": 1, 00:15:44.215 "num_base_bdevs_operational": 2, 00:15:44.215 "base_bdevs_list": [ 00:15:44.215 { 00:15:44.215 "name": "BaseBdev1", 00:15:44.215 "uuid": "2a73c8ef-4027-4104-8bac-5b3edbcc688c", 00:15:44.215 "is_configured": true, 00:15:44.215 "data_offset": 256, 00:15:44.215 "data_size": 7936 00:15:44.215 }, 00:15:44.215 { 00:15:44.215 "name": "BaseBdev2", 00:15:44.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.215 "is_configured": false, 00:15:44.215 "data_offset": 0, 00:15:44.215 "data_size": 0 00:15:44.215 } 00:15:44.215 ] 00:15:44.215 }' 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:15:44.215 11:58:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.484 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:44.484 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.484 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.484 [2024-12-06 11:58:42.184953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.485 [2024-12-06 11:58:42.185427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:44.485 [2024-12-06 11:58:42.185485] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:44.485 BaseBdev2 00:15:44.485 [2024-12-06 11:58:42.185807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.485 [2024-12-06 11:58:42.186030] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:44.485 [2024-12-06 11:58:42.186094] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:44.485 [2024-12-06 11:58:42.186324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.485 [ 00:15:44.485 { 00:15:44.485 "name": "BaseBdev2", 00:15:44.485 "aliases": [ 00:15:44.485 "f9f6073e-92b4-4954-9227-dff18a2f43fd" 00:15:44.485 ], 00:15:44.485 "product_name": "Malloc disk", 00:15:44.485 "block_size": 4128, 00:15:44.485 "num_blocks": 8192, 00:15:44.485 "uuid": "f9f6073e-92b4-4954-9227-dff18a2f43fd", 00:15:44.485 "md_size": 32, 00:15:44.485 "md_interleave": true, 00:15:44.485 "dif_type": 0, 00:15:44.485 "assigned_rate_limits": { 00:15:44.485 "rw_ios_per_sec": 0, 00:15:44.485 "rw_mbytes_per_sec": 0, 00:15:44.485 "r_mbytes_per_sec": 0, 00:15:44.485 "w_mbytes_per_sec": 0 00:15:44.485 }, 00:15:44.485 "claimed": true, 00:15:44.485 "claim_type": "exclusive_write", 
00:15:44.485 "zoned": false, 00:15:44.485 "supported_io_types": { 00:15:44.485 "read": true, 00:15:44.485 "write": true, 00:15:44.485 "unmap": true, 00:15:44.485 "flush": true, 00:15:44.485 "reset": true, 00:15:44.485 "nvme_admin": false, 00:15:44.485 "nvme_io": false, 00:15:44.485 "nvme_io_md": false, 00:15:44.485 "write_zeroes": true, 00:15:44.485 "zcopy": true, 00:15:44.485 "get_zone_info": false, 00:15:44.485 "zone_management": false, 00:15:44.485 "zone_append": false, 00:15:44.485 "compare": false, 00:15:44.485 "compare_and_write": false, 00:15:44.485 "abort": true, 00:15:44.485 "seek_hole": false, 00:15:44.485 "seek_data": false, 00:15:44.485 "copy": true, 00:15:44.485 "nvme_iov_md": false 00:15:44.485 }, 00:15:44.485 "memory_domains": [ 00:15:44.485 { 00:15:44.485 "dma_device_id": "system", 00:15:44.485 "dma_device_type": 1 00:15:44.485 }, 00:15:44.485 { 00:15:44.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.485 "dma_device_type": 2 00:15:44.485 } 00:15:44.485 ], 00:15:44.485 "driver_specific": {} 00:15:44.485 } 00:15:44.485 ] 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.485 
11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.485 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.745 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.745 "name": "Existed_Raid", 00:15:44.745 "uuid": "b163182e-a210-4b9b-9358-17f5b59583b3", 00:15:44.745 "strip_size_kb": 0, 00:15:44.745 "state": "online", 00:15:44.745 "raid_level": "raid1", 00:15:44.745 "superblock": true, 00:15:44.745 "num_base_bdevs": 2, 00:15:44.745 "num_base_bdevs_discovered": 2, 00:15:44.745 
"num_base_bdevs_operational": 2, 00:15:44.745 "base_bdevs_list": [ 00:15:44.745 { 00:15:44.745 "name": "BaseBdev1", 00:15:44.745 "uuid": "2a73c8ef-4027-4104-8bac-5b3edbcc688c", 00:15:44.745 "is_configured": true, 00:15:44.745 "data_offset": 256, 00:15:44.745 "data_size": 7936 00:15:44.745 }, 00:15:44.745 { 00:15:44.745 "name": "BaseBdev2", 00:15:44.745 "uuid": "f9f6073e-92b4-4954-9227-dff18a2f43fd", 00:15:44.745 "is_configured": true, 00:15:44.745 "data_offset": 256, 00:15:44.745 "data_size": 7936 00:15:44.745 } 00:15:44.745 ] 00:15:44.745 }' 00:15:44.745 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.745 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.005 11:58:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.005 [2024-12-06 11:58:42.692340] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.005 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.005 "name": "Existed_Raid", 00:15:45.005 "aliases": [ 00:15:45.005 "b163182e-a210-4b9b-9358-17f5b59583b3" 00:15:45.005 ], 00:15:45.005 "product_name": "Raid Volume", 00:15:45.005 "block_size": 4128, 00:15:45.005 "num_blocks": 7936, 00:15:45.005 "uuid": "b163182e-a210-4b9b-9358-17f5b59583b3", 00:15:45.005 "md_size": 32, 00:15:45.005 "md_interleave": true, 00:15:45.005 "dif_type": 0, 00:15:45.005 "assigned_rate_limits": { 00:15:45.005 "rw_ios_per_sec": 0, 00:15:45.005 "rw_mbytes_per_sec": 0, 00:15:45.005 "r_mbytes_per_sec": 0, 00:15:45.005 "w_mbytes_per_sec": 0 00:15:45.005 }, 00:15:45.005 "claimed": false, 00:15:45.005 "zoned": false, 00:15:45.005 "supported_io_types": { 00:15:45.005 "read": true, 00:15:45.005 "write": true, 00:15:45.005 "unmap": false, 00:15:45.005 "flush": false, 00:15:45.005 "reset": true, 00:15:45.005 "nvme_admin": false, 00:15:45.005 "nvme_io": false, 00:15:45.005 "nvme_io_md": false, 00:15:45.005 "write_zeroes": true, 00:15:45.005 "zcopy": false, 00:15:45.005 "get_zone_info": false, 00:15:45.005 "zone_management": false, 00:15:45.005 "zone_append": false, 00:15:45.005 "compare": false, 00:15:45.005 "compare_and_write": false, 00:15:45.005 "abort": false, 00:15:45.005 "seek_hole": false, 00:15:45.005 "seek_data": false, 00:15:45.005 "copy": false, 00:15:45.005 "nvme_iov_md": false 00:15:45.005 }, 00:15:45.005 "memory_domains": [ 00:15:45.005 { 00:15:45.005 "dma_device_id": "system", 00:15:45.005 "dma_device_type": 1 00:15:45.005 }, 00:15:45.005 { 00:15:45.005 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:45.005 "dma_device_type": 2 00:15:45.005 }, 00:15:45.005 { 00:15:45.005 "dma_device_id": "system", 00:15:45.005 "dma_device_type": 1 00:15:45.005 }, 00:15:45.005 { 00:15:45.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.005 "dma_device_type": 2 00:15:45.005 } 00:15:45.005 ], 00:15:45.005 "driver_specific": { 00:15:45.005 "raid": { 00:15:45.005 "uuid": "b163182e-a210-4b9b-9358-17f5b59583b3", 00:15:45.005 "strip_size_kb": 0, 00:15:45.005 "state": "online", 00:15:45.005 "raid_level": "raid1", 00:15:45.005 "superblock": true, 00:15:45.005 "num_base_bdevs": 2, 00:15:45.005 "num_base_bdevs_discovered": 2, 00:15:45.005 "num_base_bdevs_operational": 2, 00:15:45.005 "base_bdevs_list": [ 00:15:45.005 { 00:15:45.005 "name": "BaseBdev1", 00:15:45.005 "uuid": "2a73c8ef-4027-4104-8bac-5b3edbcc688c", 00:15:45.006 "is_configured": true, 00:15:45.006 "data_offset": 256, 00:15:45.006 "data_size": 7936 00:15:45.006 }, 00:15:45.006 { 00:15:45.006 "name": "BaseBdev2", 00:15:45.006 "uuid": "f9f6073e-92b4-4954-9227-dff18a2f43fd", 00:15:45.006 "is_configured": true, 00:15:45.006 "data_offset": 256, 00:15:45.006 "data_size": 7936 00:15:45.006 } 00:15:45.006 ] 00:15:45.006 } 00:15:45.006 } 00:15:45.006 }' 00:15:45.006 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.265 BaseBdev2' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.265 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:45.266 
11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.266 [2024-12-06 11:58:42.935713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.266 11:58:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.266 11:58:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.266 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.266 "name": "Existed_Raid", 00:15:45.266 "uuid": "b163182e-a210-4b9b-9358-17f5b59583b3", 00:15:45.266 "strip_size_kb": 0, 00:15:45.266 "state": "online", 00:15:45.266 "raid_level": "raid1", 00:15:45.266 "superblock": true, 00:15:45.266 "num_base_bdevs": 2, 00:15:45.266 "num_base_bdevs_discovered": 1, 00:15:45.266 "num_base_bdevs_operational": 1, 00:15:45.266 "base_bdevs_list": [ 00:15:45.266 { 00:15:45.266 "name": null, 00:15:45.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:45.266 "is_configured": false, 00:15:45.266 "data_offset": 0, 00:15:45.266 "data_size": 7936 00:15:45.266 }, 00:15:45.266 { 00:15:45.266 "name": "BaseBdev2", 00:15:45.266 "uuid": "f9f6073e-92b4-4954-9227-dff18a2f43fd", 00:15:45.266 "is_configured": true, 00:15:45.266 "data_offset": 256, 00:15:45.266 "data_size": 7936 00:15:45.266 } 00:15:45.266 ] 00:15:45.266 }' 00:15:45.266 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.266 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:45.838 11:58:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.838 [2024-12-06 11:58:43.466608] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.838 [2024-12-06 11:58:43.466710] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.838 [2024-12-06 11:58:43.478655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.838 [2024-12-06 11:58:43.478700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.838 [2024-12-06 11:58:43.478718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98358 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98358 ']' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98358 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98358 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.838 killing process with pid 98358 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98358' 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98358 00:15:45.838 [2024-12-06 11:58:43.569960] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.838 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98358 00:15:45.838 [2024-12-06 11:58:43.570939] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.098 
11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:15:46.098 00:15:46.098 real 0m4.011s 00:15:46.098 user 0m6.277s 00:15:46.098 sys 0m0.861s 00:15:46.098 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.098 11:58:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.098 ************************************ 00:15:46.098 END TEST raid_state_function_test_sb_md_interleaved 00:15:46.098 ************************************ 00:15:46.359 11:58:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:46.359 11:58:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:46.359 11:58:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.359 11:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.359 ************************************ 00:15:46.359 START TEST raid_superblock_test_md_interleaved 00:15:46.359 ************************************ 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98595 00:15:46.359 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98595 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98595 ']' 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.360 11:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.360 [2024-12-06 11:58:43.987707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:46.360 [2024-12-06 11:58:43.987842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98595 ] 00:15:46.621 [2024-12-06 11:58:44.133884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.621 [2024-12-06 11:58:44.179635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.621 [2024-12-06 11:58:44.222965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.621 [2024-12-06 11:58:44.223003] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt=pt1 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.195 malloc1 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.195 [2024-12-06 11:58:44.840931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.195 [2024-12-06 11:58:44.840991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.195 [2024-12-06 11:58:44.841028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:47.195 [2024-12-06 11:58:44.841039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:47.195 [2024-12-06 11:58:44.842871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.195 [2024-12-06 11:58:44.842911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.195 pt1 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:47.195 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 malloc2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 [2024-12-06 11:58:44.877631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.196 [2024-12-06 11:58:44.877689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.196 [2024-12-06 11:58:44.877708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:47.196 [2024-12-06 11:58:44.877721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.196 [2024-12-06 11:58:44.879568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.196 [2024-12-06 11:58:44.879607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.196 pt2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 [2024-12-06 11:58:44.889637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.196 [2024-12-06 
11:58:44.891474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.196 [2024-12-06 11:58:44.891633] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:47.196 [2024-12-06 11:58:44.891650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:47.196 [2024-12-06 11:58:44.891754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:47.196 [2024-12-06 11:58:44.891822] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:47.196 [2024-12-06 11:58:44.891832] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:47.196 [2024-12-06 11:58:44.891903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.196 "name": "raid_bdev1", 00:15:47.196 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:47.196 "strip_size_kb": 0, 00:15:47.196 "state": "online", 00:15:47.196 "raid_level": "raid1", 00:15:47.196 "superblock": true, 00:15:47.196 "num_base_bdevs": 2, 00:15:47.196 "num_base_bdevs_discovered": 2, 00:15:47.196 "num_base_bdevs_operational": 2, 00:15:47.196 "base_bdevs_list": [ 00:15:47.196 { 00:15:47.196 "name": "pt1", 00:15:47.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.196 "is_configured": true, 00:15:47.196 "data_offset": 256, 00:15:47.196 "data_size": 7936 00:15:47.196 }, 00:15:47.196 { 00:15:47.196 "name": "pt2", 00:15:47.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.196 "is_configured": true, 00:15:47.196 "data_offset": 256, 00:15:47.196 "data_size": 7936 00:15:47.196 } 00:15:47.196 ] 00:15:47.196 }' 00:15:47.196 11:58:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.196 11:58:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.765 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.766 [2024-12-06 11:58:45.357329] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.766 "name": "raid_bdev1", 00:15:47.766 "aliases": [ 00:15:47.766 "006b13f3-145d-4aa2-824f-37a542b43c5b" 00:15:47.766 ], 00:15:47.766 "product_name": "Raid Volume", 00:15:47.766 "block_size": 4128, 00:15:47.766 "num_blocks": 7936, 00:15:47.766 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:47.766 "md_size": 32, 
00:15:47.766 "md_interleave": true, 00:15:47.766 "dif_type": 0, 00:15:47.766 "assigned_rate_limits": { 00:15:47.766 "rw_ios_per_sec": 0, 00:15:47.766 "rw_mbytes_per_sec": 0, 00:15:47.766 "r_mbytes_per_sec": 0, 00:15:47.766 "w_mbytes_per_sec": 0 00:15:47.766 }, 00:15:47.766 "claimed": false, 00:15:47.766 "zoned": false, 00:15:47.766 "supported_io_types": { 00:15:47.766 "read": true, 00:15:47.766 "write": true, 00:15:47.766 "unmap": false, 00:15:47.766 "flush": false, 00:15:47.766 "reset": true, 00:15:47.766 "nvme_admin": false, 00:15:47.766 "nvme_io": false, 00:15:47.766 "nvme_io_md": false, 00:15:47.766 "write_zeroes": true, 00:15:47.766 "zcopy": false, 00:15:47.766 "get_zone_info": false, 00:15:47.766 "zone_management": false, 00:15:47.766 "zone_append": false, 00:15:47.766 "compare": false, 00:15:47.766 "compare_and_write": false, 00:15:47.766 "abort": false, 00:15:47.766 "seek_hole": false, 00:15:47.766 "seek_data": false, 00:15:47.766 "copy": false, 00:15:47.766 "nvme_iov_md": false 00:15:47.766 }, 00:15:47.766 "memory_domains": [ 00:15:47.766 { 00:15:47.766 "dma_device_id": "system", 00:15:47.766 "dma_device_type": 1 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.766 "dma_device_type": 2 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "dma_device_id": "system", 00:15:47.766 "dma_device_type": 1 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.766 "dma_device_type": 2 00:15:47.766 } 00:15:47.766 ], 00:15:47.766 "driver_specific": { 00:15:47.766 "raid": { 00:15:47.766 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:47.766 "strip_size_kb": 0, 00:15:47.766 "state": "online", 00:15:47.766 "raid_level": "raid1", 00:15:47.766 "superblock": true, 00:15:47.766 "num_base_bdevs": 2, 00:15:47.766 "num_base_bdevs_discovered": 2, 00:15:47.766 "num_base_bdevs_operational": 2, 00:15:47.766 "base_bdevs_list": [ 00:15:47.766 { 00:15:47.766 "name": "pt1", 00:15:47.766 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:15:47.766 "is_configured": true, 00:15:47.766 "data_offset": 256, 00:15:47.766 "data_size": 7936 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "name": "pt2", 00:15:47.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.766 "is_configured": true, 00:15:47.766 "data_offset": 256, 00:15:47.766 "data_size": 7936 00:15:47.766 } 00:15:47.766 ] 00:15:47.766 } 00:15:47.766 } 00:15:47.766 }' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:47.766 pt2' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.766 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:48.026 11:58:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 [2024-12-06 11:58:45.588852] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=006b13f3-145d-4aa2-824f-37a542b43c5b 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 006b13f3-145d-4aa2-824f-37a542b43c5b ']' 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 [2024-12-06 11:58:45.616593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.026 [2024-12-06 11:58:45.616621] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.026 [2024-12-06 11:58:45.616685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.026 [2024-12-06 11:58:45.616741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.026 [2024-12-06 11:58:45.616755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.026 11:58:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 11:58:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 [2024-12-06 11:58:45.756353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:48.027 [2024-12-06 11:58:45.758191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:48.027 [2024-12-06 11:58:45.758260] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:15:48.027 [2024-12-06 11:58:45.758308] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:48.027 [2024-12-06 11:58:45.758340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.027 [2024-12-06 11:58:45.758353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:48.027 request: 00:15:48.027 { 00:15:48.027 "name": "raid_bdev1", 00:15:48.027 "raid_level": "raid1", 00:15:48.027 "base_bdevs": [ 00:15:48.027 "malloc1", 00:15:48.027 "malloc2" 00:15:48.027 ], 00:15:48.027 "superblock": false, 00:15:48.027 "method": "bdev_raid_create", 00:15:48.027 "req_id": 1 00:15:48.027 } 00:15:48.027 Got JSON-RPC error response 00:15:48.027 response: 00:15:48.027 { 00:15:48.027 "code": -17, 00:15:48.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:48.027 } 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.027 11:58:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.027 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 [2024-12-06 11:58:45.804248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.287 [2024-12-06 11:58:45.804293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.287 [2024-12-06 11:58:45.804324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.287 [2024-12-06 11:58:45.804332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.287 [2024-12-06 11:58:45.806136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.287 [2024-12-06 11:58:45.806171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.287 [2024-12-06 11:58:45.806210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.287 [2024-12-06 11:58:45.806255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.287 pt1 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.287 11:58:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.287 
"name": "raid_bdev1", 00:15:48.287 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:48.287 "strip_size_kb": 0, 00:15:48.287 "state": "configuring", 00:15:48.287 "raid_level": "raid1", 00:15:48.287 "superblock": true, 00:15:48.287 "num_base_bdevs": 2, 00:15:48.287 "num_base_bdevs_discovered": 1, 00:15:48.287 "num_base_bdevs_operational": 2, 00:15:48.287 "base_bdevs_list": [ 00:15:48.287 { 00:15:48.287 "name": "pt1", 00:15:48.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.287 "is_configured": true, 00:15:48.287 "data_offset": 256, 00:15:48.287 "data_size": 7936 00:15:48.287 }, 00:15:48.287 { 00:15:48.287 "name": null, 00:15:48.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.287 "is_configured": false, 00:15:48.287 "data_offset": 256, 00:15:48.287 "data_size": 7936 00:15:48.287 } 00:15:48.287 ] 00:15:48.287 }' 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.287 11:58:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.547 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.547 [2024-12-06 11:58:46.259464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.547 [2024-12-06 11:58:46.259511] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.547 [2024-12-06 11:58:46.259545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:48.547 [2024-12-06 11:58:46.259554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.547 [2024-12-06 11:58:46.259668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.547 [2024-12-06 11:58:46.259686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.547 [2024-12-06 11:58:46.259722] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.547 [2024-12-06 11:58:46.259744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.548 [2024-12-06 11:58:46.259829] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:48.548 [2024-12-06 11:58:46.259844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:48.548 [2024-12-06 11:58:46.259911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:48.548 [2024-12-06 11:58:46.259968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:48.548 [2024-12-06 11:58:46.259985] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:48.548 [2024-12-06 11:58:46.260032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.548 pt2 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.548 11:58:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.548 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.808 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.808 "name": 
"raid_bdev1", 00:15:48.808 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:48.808 "strip_size_kb": 0, 00:15:48.808 "state": "online", 00:15:48.808 "raid_level": "raid1", 00:15:48.808 "superblock": true, 00:15:48.808 "num_base_bdevs": 2, 00:15:48.808 "num_base_bdevs_discovered": 2, 00:15:48.808 "num_base_bdevs_operational": 2, 00:15:48.808 "base_bdevs_list": [ 00:15:48.808 { 00:15:48.808 "name": "pt1", 00:15:48.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.808 "is_configured": true, 00:15:48.808 "data_offset": 256, 00:15:48.808 "data_size": 7936 00:15:48.808 }, 00:15:48.808 { 00:15:48.808 "name": "pt2", 00:15:48.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.808 "is_configured": true, 00:15:48.808 "data_offset": 256, 00:15:48.808 "data_size": 7936 00:15:48.808 } 00:15:48.808 ] 00:15:48.808 }' 00:15:48.808 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.808 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.068 11:58:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.068 [2024-12-06 11:58:46.742864] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.068 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.068 "name": "raid_bdev1", 00:15:49.068 "aliases": [ 00:15:49.068 "006b13f3-145d-4aa2-824f-37a542b43c5b" 00:15:49.068 ], 00:15:49.068 "product_name": "Raid Volume", 00:15:49.068 "block_size": 4128, 00:15:49.068 "num_blocks": 7936, 00:15:49.068 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:49.068 "md_size": 32, 00:15:49.068 "md_interleave": true, 00:15:49.068 "dif_type": 0, 00:15:49.068 "assigned_rate_limits": { 00:15:49.068 "rw_ios_per_sec": 0, 00:15:49.068 "rw_mbytes_per_sec": 0, 00:15:49.068 "r_mbytes_per_sec": 0, 00:15:49.068 "w_mbytes_per_sec": 0 00:15:49.068 }, 00:15:49.068 "claimed": false, 00:15:49.069 "zoned": false, 00:15:49.069 "supported_io_types": { 00:15:49.069 "read": true, 00:15:49.069 "write": true, 00:15:49.069 "unmap": false, 00:15:49.069 "flush": false, 00:15:49.069 "reset": true, 00:15:49.069 "nvme_admin": false, 00:15:49.069 "nvme_io": false, 00:15:49.069 "nvme_io_md": false, 00:15:49.069 "write_zeroes": true, 00:15:49.069 "zcopy": false, 00:15:49.069 "get_zone_info": false, 00:15:49.069 "zone_management": false, 00:15:49.069 "zone_append": false, 00:15:49.069 "compare": false, 00:15:49.069 "compare_and_write": false, 00:15:49.069 "abort": false, 00:15:49.069 "seek_hole": false, 00:15:49.069 "seek_data": false, 00:15:49.069 "copy": false, 00:15:49.069 "nvme_iov_md": 
false 00:15:49.069 }, 00:15:49.069 "memory_domains": [ 00:15:49.069 { 00:15:49.069 "dma_device_id": "system", 00:15:49.069 "dma_device_type": 1 00:15:49.069 }, 00:15:49.069 { 00:15:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.069 "dma_device_type": 2 00:15:49.069 }, 00:15:49.069 { 00:15:49.069 "dma_device_id": "system", 00:15:49.069 "dma_device_type": 1 00:15:49.069 }, 00:15:49.069 { 00:15:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.069 "dma_device_type": 2 00:15:49.069 } 00:15:49.069 ], 00:15:49.069 "driver_specific": { 00:15:49.069 "raid": { 00:15:49.069 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:49.069 "strip_size_kb": 0, 00:15:49.069 "state": "online", 00:15:49.069 "raid_level": "raid1", 00:15:49.069 "superblock": true, 00:15:49.069 "num_base_bdevs": 2, 00:15:49.069 "num_base_bdevs_discovered": 2, 00:15:49.069 "num_base_bdevs_operational": 2, 00:15:49.069 "base_bdevs_list": [ 00:15:49.069 { 00:15:49.069 "name": "pt1", 00:15:49.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.069 "is_configured": true, 00:15:49.069 "data_offset": 256, 00:15:49.069 "data_size": 7936 00:15:49.069 }, 00:15:49.069 { 00:15:49.069 "name": "pt2", 00:15:49.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.069 "is_configured": true, 00:15:49.069 "data_offset": 256, 00:15:49.069 "data_size": 7936 00:15:49.069 } 00:15:49.069 ] 00:15:49.069 } 00:15:49.069 } 00:15:49.069 }' 00:15:49.069 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.069 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.069 pt2' 00:15:49.069 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:49.330 [2024-12-06 11:58:46.970466] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.330 11:58:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 006b13f3-145d-4aa2-824f-37a542b43c5b '!=' 006b13f3-145d-4aa2-824f-37a542b43c5b ']' 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.330 [2024-12-06 11:58:47.018179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:49.330 "name": "raid_bdev1", 00:15:49.330 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:49.330 "strip_size_kb": 0, 00:15:49.330 "state": "online", 00:15:49.330 "raid_level": "raid1", 00:15:49.330 "superblock": true, 00:15:49.330 "num_base_bdevs": 2, 00:15:49.330 "num_base_bdevs_discovered": 1, 00:15:49.330 "num_base_bdevs_operational": 1, 00:15:49.330 "base_bdevs_list": [ 00:15:49.330 { 00:15:49.330 "name": null, 00:15:49.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.330 "is_configured": false, 00:15:49.330 "data_offset": 0, 00:15:49.330 "data_size": 7936 00:15:49.330 }, 00:15:49.330 { 00:15:49.330 "name": "pt2", 00:15:49.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.330 "is_configured": true, 00:15:49.330 "data_offset": 256, 00:15:49.330 "data_size": 7936 00:15:49.330 } 00:15:49.330 ] 00:15:49.330 }' 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.330 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.901 [2024-12-06 11:58:47.425428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.901 [2024-12-06 11:58:47.425456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.901 [2024-12-06 11:58:47.425503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.901 [2024-12-06 11:58:47.425540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:49.901 [2024-12-06 11:58:47.425550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.901 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.901 [2024-12-06 11:58:47.497327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.901 [2024-12-06 11:58:47.497379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.901 [2024-12-06 11:58:47.497397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:49.901 [2024-12-06 11:58:47.497406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.901 [2024-12-06 11:58:47.499267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.901 [2024-12-06 11:58:47.499301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.901 [2024-12-06 11:58:47.499345] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.901 [2024-12-06 11:58:47.499372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.901 [2024-12-06 11:58:47.499448] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:49.901 [2024-12-06 11:58:47.499464] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:15:49.901 [2024-12-06 11:58:47.499537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:49.901 [2024-12-06 11:58:47.499598] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:49.902 [2024-12-06 11:58:47.499613] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:49.902 [2024-12-06 11:58:47.499664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.902 pt2 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.902 11:58:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.902 "name": "raid_bdev1", 00:15:49.902 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:49.902 "strip_size_kb": 0, 00:15:49.902 "state": "online", 00:15:49.902 "raid_level": "raid1", 00:15:49.902 "superblock": true, 00:15:49.902 "num_base_bdevs": 2, 00:15:49.902 "num_base_bdevs_discovered": 1, 00:15:49.902 "num_base_bdevs_operational": 1, 00:15:49.902 "base_bdevs_list": [ 00:15:49.902 { 00:15:49.902 "name": null, 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.902 "is_configured": false, 00:15:49.902 "data_offset": 256, 00:15:49.902 "data_size": 7936 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "name": "pt2", 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.902 "is_configured": true, 00:15:49.902 "data_offset": 256, 00:15:49.902 "data_size": 7936 00:15:49.902 } 00:15:49.902 ] 00:15:49.902 }' 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.902 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.472 11:58:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.472 [2024-12-06 11:58:47.964494] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.472 [2024-12-06 11:58:47.964519] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.472 [2024-12-06 11:58:47.964574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.472 [2024-12-06 11:58:47.964609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.472 [2024-12-06 11:58:47.964620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.472 11:58:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.472 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.472 [2024-12-06 11:58:48.028400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.472 [2024-12-06 11:58:48.028450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.472 [2024-12-06 11:58:48.028467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:50.472 [2024-12-06 11:58:48.028481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.472 [2024-12-06 11:58:48.030082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.472 [2024-12-06 11:58:48.030119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.472 [2024-12-06 11:58:48.030158] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:50.472 [2024-12-06 11:58:48.030187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.472 [2024-12-06 11:58:48.030287] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:50.472 [2024-12-06 11:58:48.030302] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.473 [2024-12-06 11:58:48.030316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:50.473 [2024-12-06 11:58:48.030345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.473 [2024-12-06 11:58:48.030417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000002380 00:15:50.473 [2024-12-06 11:58:48.030428] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:50.473 [2024-12-06 11:58:48.030502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:50.473 [2024-12-06 11:58:48.030566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:50.473 [2024-12-06 11:58:48.030573] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:50.473 [2024-12-06 11:58:48.030632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.473 pt1 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.473 11:58:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.473 "name": "raid_bdev1", 00:15:50.473 "uuid": "006b13f3-145d-4aa2-824f-37a542b43c5b", 00:15:50.473 "strip_size_kb": 0, 00:15:50.473 "state": "online", 00:15:50.473 "raid_level": "raid1", 00:15:50.473 "superblock": true, 00:15:50.473 "num_base_bdevs": 2, 00:15:50.473 "num_base_bdevs_discovered": 1, 00:15:50.473 "num_base_bdevs_operational": 1, 00:15:50.473 "base_bdevs_list": [ 00:15:50.473 { 00:15:50.473 "name": null, 00:15:50.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.473 "is_configured": false, 00:15:50.473 "data_offset": 256, 00:15:50.473 "data_size": 7936 00:15:50.473 }, 00:15:50.473 { 00:15:50.473 "name": "pt2", 00:15:50.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.473 "is_configured": true, 00:15:50.473 "data_offset": 256, 00:15:50.473 "data_size": 7936 00:15:50.473 } 00:15:50.473 ] 00:15:50.473 }' 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.473 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.733 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:50.733 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:50.733 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.733 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.994 [2024-12-06 11:58:48.535733] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 006b13f3-145d-4aa2-824f-37a542b43c5b '!=' 006b13f3-145d-4aa2-824f-37a542b43c5b ']' 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98595 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98595 ']' 00:15:50.994 11:58:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98595 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98595 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98595' 00:15:50.994 killing process with pid 98595 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98595 00:15:50.994 [2024-12-06 11:58:48.603028] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.994 [2024-12-06 11:58:48.603139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.994 [2024-12-06 11:58:48.603208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.994 [2024-12-06 11:58:48.603271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:50.994 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98595 00:15:50.994 [2024-12-06 11:58:48.627480] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.255 ************************************ 00:15:51.255 END TEST raid_superblock_test_md_interleaved 00:15:51.255 ************************************ 00:15:51.255 11:58:48
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:51.255 00:15:51.255 real 0m4.965s 00:15:51.255 user 0m8.105s 00:15:51.255 sys 0m1.062s 00:15:51.255 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.255 11:58:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.255 11:58:48 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:51.255 11:58:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:51.256 11:58:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.256 11:58:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.256 ************************************ 00:15:51.256 START TEST raid_rebuild_test_sb_md_interleaved 00:15:51.256 ************************************ 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.256 11:58:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:51.256 
11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=98913 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 98913 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98913 ']' 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.256 11:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.516 [2024-12-06 11:58:49.036421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:51.517 [2024-12-06 11:58:49.036651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98913 ] 00:15:51.517 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:51.517 Zero copy mechanism will not be used.
00:15:51.517 [2024-12-06 11:58:49.174929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.517 [2024-12-06 11:58:49.218749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.517 [2024-12-06 11:58:49.261287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.517 [2024-12-06 11:58:49.261397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 BaseBdev1_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 [2024-12-06 11:58:49.879281] vbdev_passthru.c:
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.458 [2024-12-06 11:58:49.879346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.458 [2024-12-06 11:58:49.879373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:52.458 [2024-12-06 11:58:49.879381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.458 [2024-12-06 11:58:49.881240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.458 [2024-12-06 11:58:49.881286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.458 BaseBdev1 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 BaseBdev2_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 [2024-12-06 11:58:49.921889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:15:52.458 [2024-12-06 11:58:49.921991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.458 [2024-12-06 11:58:49.922036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:52.458 [2024-12-06 11:58:49.922057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.458 [2024-12-06 11:58:49.926370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.458 [2024-12-06 11:58:49.926442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:52.458 BaseBdev2 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 spare_malloc 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 spare_delay 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 [2024-12-06 11:58:49.964731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:52.458 [2024-12-06 11:58:49.964785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.458 [2024-12-06 11:58:49.964806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:52.458 [2024-12-06 11:58:49.964814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.458 [2024-12-06 11:58:49.966689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.458 [2024-12-06 11:58:49.966725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:52.458 spare 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 [2024-12-06 11:58:49.976765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.458 [2024-12-06 11:58:49.978527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.458 [2024-12-06 11:58:49.978693] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:52.458 [2024-12-06 11:58:49.978706] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:52.458 [2024-12-06 11:58:49.978805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:52.458 [2024-12-06 11:58:49.978876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:52.458 [2024-12-06 11:58:49.978887] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:52.458 [2024-12-06 11:58:49.978963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.458 11:58:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.458 11:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.458 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.458 "name": "raid_bdev1", 00:15:52.458 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:52.458 "strip_size_kb": 0, 00:15:52.459 "state": "online", 00:15:52.459 "raid_level": "raid1", 00:15:52.459 "superblock": true, 00:15:52.459 "num_base_bdevs": 2, 00:15:52.459 "num_base_bdevs_discovered": 2, 00:15:52.459 "num_base_bdevs_operational": 2, 00:15:52.459 "base_bdevs_list": [ 00:15:52.459 { 00:15:52.459 "name": "BaseBdev1", 00:15:52.459 "uuid": "568dbdfc-7a63-5f0a-bd82-55a600ac764e", 00:15:52.459 "is_configured": true, 00:15:52.459 "data_offset": 256, 00:15:52.459 "data_size": 7936 00:15:52.459 }, 00:15:52.459 { 00:15:52.459 "name": "BaseBdev2", 00:15:52.459 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:52.459 "is_configured": true, 00:15:52.459 "data_offset": 256, 00:15:52.459 "data_size": 7936 00:15:52.459 } 00:15:52.459 ] 00:15:52.459 }' 00:15:52.459 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.459 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:52.718 11:58:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 [2024-12-06 11:58:50.380267] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.718 11:58:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 [2024-12-06 11:58:50.439917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.718 11:58:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.978 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.978 "name": "raid_bdev1", 00:15:52.978 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:52.978 "strip_size_kb": 0, 00:15:52.978 "state": "online", 00:15:52.978 "raid_level": "raid1", 00:15:52.978 "superblock": true, 00:15:52.978 "num_base_bdevs": 2, 00:15:52.978 "num_base_bdevs_discovered": 1, 00:15:52.978 "num_base_bdevs_operational": 1, 00:15:52.978 "base_bdevs_list": [ 00:15:52.978 { 00:15:52.978 "name": null, 00:15:52.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.978 "is_configured": false, 00:15:52.978 "data_offset": 0, 00:15:52.978 "data_size": 7936 00:15:52.978 }, 00:15:52.978 { 00:15:52.978 "name": "BaseBdev2", 00:15:52.978 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:52.978 "is_configured": true, 00:15:52.978 "data_offset": 256, 00:15:52.979 "data_size": 7936 00:15:52.979 } 00:15:52.979 ] 00:15:52.979 }' 00:15:52.979 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.979 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.239 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.239 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.239 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.239 [2024-12-06 11:58:50.915124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.239 [2024-12-06 11:58:50.918071] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:53.239 [2024-12-06 11:58:50.919971] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.239 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.239 11:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.181 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.442 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.442 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.442 "name": "raid_bdev1", 00:15:54.442 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:54.442 "strip_size_kb": 0, 00:15:54.442 "state": "online", 00:15:54.442 "raid_level": "raid1", 00:15:54.442 
"superblock": true, 00:15:54.442 "num_base_bdevs": 2, 00:15:54.442 "num_base_bdevs_discovered": 2, 00:15:54.442 "num_base_bdevs_operational": 2, 00:15:54.442 "process": { 00:15:54.442 "type": "rebuild", 00:15:54.442 "target": "spare", 00:15:54.442 "progress": { 00:15:54.442 "blocks": 2560, 00:15:54.442 "percent": 32 00:15:54.442 } 00:15:54.442 }, 00:15:54.442 "base_bdevs_list": [ 00:15:54.442 { 00:15:54.442 "name": "spare", 00:15:54.442 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:54.442 "is_configured": true, 00:15:54.442 "data_offset": 256, 00:15:54.442 "data_size": 7936 00:15:54.442 }, 00:15:54.442 { 00:15:54.442 "name": "BaseBdev2", 00:15:54.442 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:54.442 "is_configured": true, 00:15:54.442 "data_offset": 256, 00:15:54.442 "data_size": 7936 00:15:54.442 } 00:15:54.442 ] 00:15:54.442 }' 00:15:54.442 11:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.442 [2024-12-06 11:58:52.079419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.442 [2024-12-06 11:58:52.124718] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:15:54.442 [2024-12-06 11:58:52.124790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.442 [2024-12-06 11:58:52.124807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.442 [2024-12-06 11:58:52.124821] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.442 "name": "raid_bdev1", 00:15:54.442 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:54.442 "strip_size_kb": 0, 00:15:54.442 "state": "online", 00:15:54.442 "raid_level": "raid1", 00:15:54.442 "superblock": true, 00:15:54.442 "num_base_bdevs": 2, 00:15:54.442 "num_base_bdevs_discovered": 1, 00:15:54.442 "num_base_bdevs_operational": 1, 00:15:54.442 "base_bdevs_list": [ 00:15:54.442 { 00:15:54.442 "name": null, 00:15:54.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.442 "is_configured": false, 00:15:54.442 "data_offset": 0, 00:15:54.442 "data_size": 7936 00:15:54.442 }, 00:15:54.442 { 00:15:54.442 "name": "BaseBdev2", 00:15:54.442 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:54.442 "is_configured": true, 00:15:54.442 "data_offset": 256, 00:15:54.442 "data_size": 7936 00:15:54.442 } 00:15:54.442 ] 00:15:54.442 }' 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.442 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.014 
11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.014 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.014 "name": "raid_bdev1", 00:15:55.014 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:55.014 "strip_size_kb": 0, 00:15:55.014 "state": "online", 00:15:55.014 "raid_level": "raid1", 00:15:55.014 "superblock": true, 00:15:55.014 "num_base_bdevs": 2, 00:15:55.014 "num_base_bdevs_discovered": 1, 00:15:55.014 "num_base_bdevs_operational": 1, 00:15:55.014 "base_bdevs_list": [ 00:15:55.014 { 00:15:55.014 "name": null, 00:15:55.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.014 "is_configured": false, 00:15:55.014 "data_offset": 0, 00:15:55.014 "data_size": 7936 00:15:55.014 }, 00:15:55.014 { 00:15:55.015 "name": "BaseBdev2", 00:15:55.015 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:55.015 "is_configured": true, 00:15:55.015 "data_offset": 256, 00:15:55.015 "data_size": 7936 00:15:55.015 } 00:15:55.015 ] 00:15:55.015 }' 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.015 11:58:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.015 [2024-12-06 11:58:52.675487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.015 [2024-12-06 11:58:52.677948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:55.015 [2024-12-06 11:58:52.679852] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.015 11:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.956 
11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.956 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.217 "name": "raid_bdev1", 00:15:56.217 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:56.217 "strip_size_kb": 0, 00:15:56.217 "state": "online", 00:15:56.217 "raid_level": "raid1", 00:15:56.217 "superblock": true, 00:15:56.217 "num_base_bdevs": 2, 00:15:56.217 "num_base_bdevs_discovered": 2, 00:15:56.217 "num_base_bdevs_operational": 2, 00:15:56.217 "process": { 00:15:56.217 "type": "rebuild", 00:15:56.217 "target": "spare", 00:15:56.217 "progress": { 00:15:56.217 "blocks": 2560, 00:15:56.217 "percent": 32 00:15:56.217 } 00:15:56.217 }, 00:15:56.217 "base_bdevs_list": [ 00:15:56.217 { 00:15:56.217 "name": "spare", 00:15:56.217 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:56.217 "is_configured": true, 00:15:56.217 "data_offset": 256, 00:15:56.217 "data_size": 7936 00:15:56.217 }, 00:15:56.217 { 00:15:56.217 "name": "BaseBdev2", 00:15:56.217 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:56.217 "is_configured": true, 00:15:56.217 "data_offset": 256, 00:15:56.217 "data_size": 7936 00:15:56.217 } 00:15:56.217 ] 00:15:56.217 }' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:56.217 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=607 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.217 "name": "raid_bdev1", 00:15:56.217 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:56.217 "strip_size_kb": 0, 00:15:56.217 "state": "online", 00:15:56.217 "raid_level": "raid1", 00:15:56.217 "superblock": true, 00:15:56.217 "num_base_bdevs": 2, 00:15:56.217 "num_base_bdevs_discovered": 2, 00:15:56.217 "num_base_bdevs_operational": 2, 00:15:56.217 "process": { 00:15:56.217 "type": "rebuild", 00:15:56.217 "target": "spare", 00:15:56.217 "progress": { 00:15:56.217 "blocks": 2816, 00:15:56.217 "percent": 35 00:15:56.217 } 00:15:56.217 }, 00:15:56.217 "base_bdevs_list": [ 00:15:56.217 { 00:15:56.217 "name": "spare", 00:15:56.217 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:56.217 "is_configured": true, 00:15:56.217 "data_offset": 256, 00:15:56.217 "data_size": 7936 00:15:56.217 }, 00:15:56.217 { 00:15:56.217 "name": "BaseBdev2", 00:15:56.217 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:56.217 "is_configured": true, 00:15:56.217 "data_offset": 256, 00:15:56.217 "data_size": 7936 00:15:56.217 } 00:15:56.217 ] 00:15:56.217 }' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.217 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.218 11:58:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.218 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.218 11:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.600 11:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.600 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.600 "name": "raid_bdev1", 00:15:57.600 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:57.600 "strip_size_kb": 0, 00:15:57.600 "state": 
"online", 00:15:57.600 "raid_level": "raid1", 00:15:57.600 "superblock": true, 00:15:57.600 "num_base_bdevs": 2, 00:15:57.600 "num_base_bdevs_discovered": 2, 00:15:57.600 "num_base_bdevs_operational": 2, 00:15:57.600 "process": { 00:15:57.600 "type": "rebuild", 00:15:57.600 "target": "spare", 00:15:57.600 "progress": { 00:15:57.600 "blocks": 5632, 00:15:57.600 "percent": 70 00:15:57.600 } 00:15:57.600 }, 00:15:57.600 "base_bdevs_list": [ 00:15:57.600 { 00:15:57.600 "name": "spare", 00:15:57.600 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:57.600 "is_configured": true, 00:15:57.600 "data_offset": 256, 00:15:57.600 "data_size": 7936 00:15:57.600 }, 00:15:57.600 { 00:15:57.600 "name": "BaseBdev2", 00:15:57.600 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:57.600 "is_configured": true, 00:15:57.600 "data_offset": 256, 00:15:57.600 "data_size": 7936 00:15:57.600 } 00:15:57.600 ] 00:15:57.600 }' 00:15:57.600 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.600 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.600 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.600 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.601 11:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.171 [2024-12-06 11:58:55.790220] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:58.171 [2024-12-06 11:58:55.790365] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:58.171 [2024-12-06 11:58:55.790511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.431 "name": "raid_bdev1", 00:15:58.431 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:58.431 "strip_size_kb": 0, 00:15:58.431 "state": "online", 00:15:58.431 "raid_level": "raid1", 00:15:58.431 "superblock": true, 00:15:58.431 "num_base_bdevs": 2, 00:15:58.431 "num_base_bdevs_discovered": 2, 00:15:58.431 "num_base_bdevs_operational": 2, 00:15:58.431 "base_bdevs_list": [ 00:15:58.431 { 00:15:58.431 "name": "spare", 00:15:58.431 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:58.431 "is_configured": true, 00:15:58.431 "data_offset": 256, 
00:15:58.431 "data_size": 7936 00:15:58.431 }, 00:15:58.431 { 00:15:58.431 "name": "BaseBdev2", 00:15:58.431 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:58.431 "is_configured": true, 00:15:58.431 "data_offset": 256, 00:15:58.431 "data_size": 7936 00:15:58.431 } 00:15:58.431 ] 00:15:58.431 }' 00:15:58.431 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.693 "name": "raid_bdev1", 00:15:58.693 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:58.693 "strip_size_kb": 0, 00:15:58.693 "state": "online", 00:15:58.693 "raid_level": "raid1", 00:15:58.693 "superblock": true, 00:15:58.693 "num_base_bdevs": 2, 00:15:58.693 "num_base_bdevs_discovered": 2, 00:15:58.693 "num_base_bdevs_operational": 2, 00:15:58.693 "base_bdevs_list": [ 00:15:58.693 { 00:15:58.693 "name": "spare", 00:15:58.693 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 256, 00:15:58.693 "data_size": 7936 00:15:58.693 }, 00:15:58.693 { 00:15:58.693 "name": "BaseBdev2", 00:15:58.693 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 256, 00:15:58.693 "data_size": 7936 00:15:58.693 } 00:15:58.693 ] 00:15:58.693 }' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.693 11:58:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.693 "name": "raid_bdev1", 00:15:58.693 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:58.693 "strip_size_kb": 0, 00:15:58.693 "state": "online", 00:15:58.693 "raid_level": "raid1", 00:15:58.693 "superblock": true, 00:15:58.693 "num_base_bdevs": 2, 00:15:58.693 "num_base_bdevs_discovered": 2, 
00:15:58.693 "num_base_bdevs_operational": 2, 00:15:58.693 "base_bdevs_list": [ 00:15:58.693 { 00:15:58.693 "name": "spare", 00:15:58.693 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 256, 00:15:58.693 "data_size": 7936 00:15:58.693 }, 00:15:58.693 { 00:15:58.693 "name": "BaseBdev2", 00:15:58.693 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 256, 00:15:58.693 "data_size": 7936 00:15:58.693 } 00:15:58.693 ] 00:15:58.693 }' 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.693 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.264 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.264 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 [2024-12-06 11:58:56.739818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.265 [2024-12-06 11:58:56.739846] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.265 [2024-12-06 11:58:56.739916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.265 [2024-12-06 11:58:56.739982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.265 [2024-12-06 11:58:56.739994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 [2024-12-06 11:58:56.795706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.265 [2024-12-06 11:58:56.795823] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:59.265 [2024-12-06 11:58:56.795862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.265 [2024-12-06 11:58:56.795892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.265 [2024-12-06 11:58:56.797866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.265 [2024-12-06 11:58:56.797956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.265 [2024-12-06 11:58:56.798026] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:59.265 [2024-12-06 11:58:56.798114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.265 [2024-12-06 11:58:56.798217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.265 spare 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 [2024-12-06 11:58:56.898155] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:59.265 [2024-12-06 11:58:56.898222] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:59.265 [2024-12-06 11:58:56.898360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:59.265 [2024-12-06 11:58:56.898465] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:59.265 [2024-12-06 11:58:56.898518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:59.265 [2024-12-06 11:58:56.898632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.265 11:58:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.265 "name": "raid_bdev1", 00:15:59.265 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:59.265 "strip_size_kb": 0, 00:15:59.265 "state": "online", 00:15:59.265 "raid_level": "raid1", 00:15:59.265 "superblock": true, 00:15:59.265 "num_base_bdevs": 2, 00:15:59.265 "num_base_bdevs_discovered": 2, 00:15:59.265 "num_base_bdevs_operational": 2, 00:15:59.265 "base_bdevs_list": [ 00:15:59.265 { 00:15:59.265 "name": "spare", 00:15:59.265 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:59.265 "is_configured": true, 00:15:59.265 "data_offset": 256, 00:15:59.265 "data_size": 7936 00:15:59.265 }, 00:15:59.265 { 00:15:59.265 "name": "BaseBdev2", 00:15:59.265 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:59.265 "is_configured": true, 00:15:59.265 "data_offset": 256, 00:15:59.265 "data_size": 7936 00:15:59.265 } 00:15:59.265 ] 00:15:59.265 }' 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.265 11:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.525 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.785 "name": "raid_bdev1", 00:15:59.785 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:59.785 "strip_size_kb": 0, 00:15:59.785 "state": "online", 00:15:59.785 "raid_level": "raid1", 00:15:59.785 "superblock": true, 00:15:59.785 "num_base_bdevs": 2, 00:15:59.785 "num_base_bdevs_discovered": 2, 00:15:59.785 "num_base_bdevs_operational": 2, 00:15:59.785 "base_bdevs_list": [ 00:15:59.785 { 00:15:59.785 "name": "spare", 00:15:59.785 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:15:59.785 "is_configured": true, 00:15:59.785 "data_offset": 256, 00:15:59.785 "data_size": 7936 00:15:59.785 }, 00:15:59.785 { 00:15:59.785 "name": "BaseBdev2", 00:15:59.785 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:59.785 "is_configured": true, 00:15:59.785 "data_offset": 256, 00:15:59.785 "data_size": 7936 00:15:59.785 } 00:15:59.785 ] 00:15:59.785 }' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 [2024-12-06 11:58:57.418735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.785 "name": "raid_bdev1", 00:15:59.785 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:15:59.785 "strip_size_kb": 0, 00:15:59.785 "state": "online", 00:15:59.785 "raid_level": "raid1", 00:15:59.785 "superblock": true, 00:15:59.785 "num_base_bdevs": 2, 00:15:59.785 "num_base_bdevs_discovered": 1, 00:15:59.785 "num_base_bdevs_operational": 1, 00:15:59.785 "base_bdevs_list": [ 00:15:59.785 { 00:15:59.785 "name": null, 00:15:59.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.785 
"is_configured": false, 00:15:59.785 "data_offset": 0, 00:15:59.785 "data_size": 7936 00:15:59.785 }, 00:15:59.785 { 00:15:59.785 "name": "BaseBdev2", 00:15:59.785 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:15:59.785 "is_configured": true, 00:15:59.785 "data_offset": 256, 00:15:59.785 "data_size": 7936 00:15:59.785 } 00:15:59.785 ] 00:15:59.785 }' 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.785 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.354 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.354 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.354 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.354 [2024-12-06 11:58:57.869967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.354 [2024-12-06 11:58:57.870096] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:00.354 [2024-12-06 11:58:57.870111] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:00.354 [2024-12-06 11:58:57.870156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.355 [2024-12-06 11:58:57.872933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:00.355 [2024-12-06 11:58:57.874807] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.355 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.355 11:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.295 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.296 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.296 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:01.296 "name": "raid_bdev1", 00:16:01.296 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:01.296 "strip_size_kb": 0, 00:16:01.296 "state": "online", 00:16:01.296 "raid_level": "raid1", 00:16:01.296 "superblock": true, 00:16:01.296 "num_base_bdevs": 2, 00:16:01.296 "num_base_bdevs_discovered": 2, 00:16:01.296 "num_base_bdevs_operational": 2, 00:16:01.296 "process": { 00:16:01.296 "type": "rebuild", 00:16:01.296 "target": "spare", 00:16:01.296 "progress": { 00:16:01.296 "blocks": 2560, 00:16:01.296 "percent": 32 00:16:01.296 } 00:16:01.296 }, 00:16:01.296 "base_bdevs_list": [ 00:16:01.296 { 00:16:01.296 "name": "spare", 00:16:01.296 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:16:01.296 "is_configured": true, 00:16:01.296 "data_offset": 256, 00:16:01.296 "data_size": 7936 00:16:01.296 }, 00:16:01.296 { 00:16:01.296 "name": "BaseBdev2", 00:16:01.296 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:01.296 "is_configured": true, 00:16:01.296 "data_offset": 256, 00:16:01.296 "data_size": 7936 00:16:01.296 } 00:16:01.296 ] 00:16:01.296 }' 00:16:01.296 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.296 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.296 11:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.296 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.296 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:01.296 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.296 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.296 [2024-12-06 11:58:59.017563] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.556 [2024-12-06 11:58:59.078789] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.556 [2024-12-06 11:58:59.078888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.556 [2024-12-06 11:58:59.078941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.557 [2024-12-06 11:58:59.078962] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.557 11:58:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.557 "name": "raid_bdev1", 00:16:01.557 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:01.557 "strip_size_kb": 0, 00:16:01.557 "state": "online", 00:16:01.557 "raid_level": "raid1", 00:16:01.557 "superblock": true, 00:16:01.557 "num_base_bdevs": 2, 00:16:01.557 "num_base_bdevs_discovered": 1, 00:16:01.557 "num_base_bdevs_operational": 1, 00:16:01.557 "base_bdevs_list": [ 00:16:01.557 { 00:16:01.557 "name": null, 00:16:01.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.557 "is_configured": false, 00:16:01.557 "data_offset": 0, 00:16:01.557 "data_size": 7936 00:16:01.557 }, 00:16:01.557 { 00:16:01.557 "name": "BaseBdev2", 00:16:01.557 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:01.557 "is_configured": true, 00:16:01.557 "data_offset": 256, 00:16:01.557 "data_size": 7936 00:16:01.557 } 00:16:01.557 ] 00:16:01.557 }' 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.557 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.817 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.817 11:58:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.817 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.817 [2024-12-06 11:58:59.513534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.817 [2024-12-06 11:58:59.513642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.817 [2024-12-06 11:58:59.513699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:01.817 [2024-12-06 11:58:59.513727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.817 [2024-12-06 11:58:59.513906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.817 [2024-12-06 11:58:59.513949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.817 [2024-12-06 11:58:59.514020] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:01.817 [2024-12-06 11:58:59.514053] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:01.817 [2024-12-06 11:58:59.514091] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.817 [2024-12-06 11:58:59.514153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.817 [2024-12-06 11:58:59.516468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:01.817 [2024-12-06 11:58:59.518344] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.817 spare 00:16:01.817 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.817 11:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:03.198 "name": "raid_bdev1", 00:16:03.198 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:03.198 "strip_size_kb": 0, 00:16:03.198 "state": "online", 00:16:03.198 "raid_level": "raid1", 00:16:03.198 "superblock": true, 00:16:03.198 "num_base_bdevs": 2, 00:16:03.198 "num_base_bdevs_discovered": 2, 00:16:03.198 "num_base_bdevs_operational": 2, 00:16:03.198 "process": { 00:16:03.198 "type": "rebuild", 00:16:03.198 "target": "spare", 00:16:03.198 "progress": { 00:16:03.198 "blocks": 2560, 00:16:03.198 "percent": 32 00:16:03.198 } 00:16:03.198 }, 00:16:03.198 "base_bdevs_list": [ 00:16:03.198 { 00:16:03.198 "name": "spare", 00:16:03.198 "uuid": "30eaa8ad-5fe6-553a-85d8-cf928268b6c8", 00:16:03.198 "is_configured": true, 00:16:03.198 "data_offset": 256, 00:16:03.198 "data_size": 7936 00:16:03.198 }, 00:16:03.198 { 00:16:03.198 "name": "BaseBdev2", 00:16:03.198 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:03.198 "is_configured": true, 00:16:03.198 "data_offset": 256, 00:16:03.198 "data_size": 7936 00:16:03.198 } 00:16:03.198 ] 00:16:03.198 }' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.198 [2024-12-06 
11:59:00.685018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.198 [2024-12-06 11:59:00.722282] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.198 [2024-12-06 11:59:00.722333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.198 [2024-12-06 11:59:00.722347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.198 [2024-12-06 11:59:00.722355] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.198 11:59:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.198 "name": "raid_bdev1", 00:16:03.198 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:03.198 "strip_size_kb": 0, 00:16:03.198 "state": "online", 00:16:03.198 "raid_level": "raid1", 00:16:03.198 "superblock": true, 00:16:03.198 "num_base_bdevs": 2, 00:16:03.198 "num_base_bdevs_discovered": 1, 00:16:03.198 "num_base_bdevs_operational": 1, 00:16:03.198 "base_bdevs_list": [ 00:16:03.198 { 00:16:03.198 "name": null, 00:16:03.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.198 "is_configured": false, 00:16:03.198 "data_offset": 0, 00:16:03.198 "data_size": 7936 00:16:03.198 }, 00:16:03.198 { 00:16:03.198 "name": "BaseBdev2", 00:16:03.198 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:03.198 "is_configured": true, 00:16:03.198 "data_offset": 256, 00:16:03.198 "data_size": 7936 00:16:03.198 } 00:16:03.198 ] 00:16:03.198 }' 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.198 11:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.456 11:59:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.456 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.457 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.457 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.457 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.716 "name": "raid_bdev1", 00:16:03.716 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:03.716 "strip_size_kb": 0, 00:16:03.716 "state": "online", 00:16:03.716 "raid_level": "raid1", 00:16:03.716 "superblock": true, 00:16:03.716 "num_base_bdevs": 2, 00:16:03.716 "num_base_bdevs_discovered": 1, 00:16:03.716 "num_base_bdevs_operational": 1, 00:16:03.716 "base_bdevs_list": [ 00:16:03.716 { 00:16:03.716 "name": null, 00:16:03.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.716 "is_configured": false, 00:16:03.716 "data_offset": 0, 00:16:03.716 "data_size": 7936 00:16:03.716 }, 00:16:03.716 { 00:16:03.716 "name": "BaseBdev2", 00:16:03.716 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:03.716 "is_configured": true, 00:16:03.716 "data_offset": 256, 
00:16:03.716 "data_size": 7936 00:16:03.716 } 00:16:03.716 ] 00:16:03.716 }' 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.716 [2024-12-06 11:59:01.332474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.716 [2024-12-06 11:59:01.332532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.716 [2024-12-06 11:59:01.332567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.716 [2024-12-06 11:59:01.332578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.716 [2024-12-06 11:59:01.332723] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.716 [2024-12-06 11:59:01.332738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.716 [2024-12-06 11:59:01.332780] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:03.716 [2024-12-06 11:59:01.332793] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.716 [2024-12-06 11:59:01.332800] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:03.716 [2024-12-06 11:59:01.332812] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:03.716 BaseBdev1 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.716 11:59:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.654 11:59:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.654 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.654 "name": "raid_bdev1", 00:16:04.654 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:04.655 "strip_size_kb": 0, 00:16:04.655 "state": "online", 00:16:04.655 "raid_level": "raid1", 00:16:04.655 "superblock": true, 00:16:04.655 "num_base_bdevs": 2, 00:16:04.655 "num_base_bdevs_discovered": 1, 00:16:04.655 "num_base_bdevs_operational": 1, 00:16:04.655 "base_bdevs_list": [ 00:16:04.655 { 00:16:04.655 "name": null, 00:16:04.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.655 "is_configured": false, 00:16:04.655 "data_offset": 0, 00:16:04.655 "data_size": 7936 00:16:04.655 }, 00:16:04.655 { 00:16:04.655 "name": "BaseBdev2", 00:16:04.655 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:04.655 "is_configured": true, 00:16:04.655 "data_offset": 256, 00:16:04.655 "data_size": 7936 00:16:04.655 } 00:16:04.655 ] 00:16:04.655 }' 00:16:04.655 11:59:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.655 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.222 "name": "raid_bdev1", 00:16:05.222 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:05.222 "strip_size_kb": 0, 00:16:05.222 "state": "online", 00:16:05.222 "raid_level": "raid1", 00:16:05.222 "superblock": true, 00:16:05.222 "num_base_bdevs": 2, 00:16:05.222 "num_base_bdevs_discovered": 1, 00:16:05.222 "num_base_bdevs_operational": 1, 00:16:05.222 "base_bdevs_list": [ 00:16:05.222 { 00:16:05.222 "name": 
null, 00:16:05.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.222 "is_configured": false, 00:16:05.222 "data_offset": 0, 00:16:05.222 "data_size": 7936 00:16:05.222 }, 00:16:05.222 { 00:16:05.222 "name": "BaseBdev2", 00:16:05.222 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:05.222 "is_configured": true, 00:16:05.222 "data_offset": 256, 00:16:05.222 "data_size": 7936 00:16:05.222 } 00:16:05.222 ] 00:16:05.222 }' 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.222 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 [2024-12-06 11:59:02.825933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.223 [2024-12-06 11:59:02.826065] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.223 [2024-12-06 11:59:02.826077] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:05.223 request: 00:16:05.223 { 00:16:05.223 "base_bdev": "BaseBdev1", 00:16:05.223 "raid_bdev": "raid_bdev1", 00:16:05.223 "method": "bdev_raid_add_base_bdev", 00:16:05.223 "req_id": 1 00:16:05.223 } 00:16:05.223 Got JSON-RPC error response 00:16:05.223 response: 00:16:05.223 { 00:16:05.223 "code": -22, 00:16:05.223 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:05.223 } 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.223 11:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.159 "name": "raid_bdev1", 00:16:06.159 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:06.159 "strip_size_kb": 0, 
00:16:06.159 "state": "online", 00:16:06.159 "raid_level": "raid1", 00:16:06.159 "superblock": true, 00:16:06.159 "num_base_bdevs": 2, 00:16:06.159 "num_base_bdevs_discovered": 1, 00:16:06.159 "num_base_bdevs_operational": 1, 00:16:06.159 "base_bdevs_list": [ 00:16:06.159 { 00:16:06.159 "name": null, 00:16:06.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.159 "is_configured": false, 00:16:06.159 "data_offset": 0, 00:16:06.159 "data_size": 7936 00:16:06.159 }, 00:16:06.159 { 00:16:06.159 "name": "BaseBdev2", 00:16:06.159 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:06.159 "is_configured": true, 00:16:06.159 "data_offset": 256, 00:16:06.159 "data_size": 7936 00:16:06.159 } 00:16:06.159 ] 00:16:06.159 }' 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.159 11:59:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.729 11:59:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.729 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.729 "name": "raid_bdev1", 00:16:06.729 "uuid": "cf5e0d19-9a7e-4ca4-84dc-bb090783f7e4", 00:16:06.730 "strip_size_kb": 0, 00:16:06.730 "state": "online", 00:16:06.730 "raid_level": "raid1", 00:16:06.730 "superblock": true, 00:16:06.730 "num_base_bdevs": 2, 00:16:06.730 "num_base_bdevs_discovered": 1, 00:16:06.730 "num_base_bdevs_operational": 1, 00:16:06.730 "base_bdevs_list": [ 00:16:06.730 { 00:16:06.730 "name": null, 00:16:06.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.730 "is_configured": false, 00:16:06.730 "data_offset": 0, 00:16:06.730 "data_size": 7936 00:16:06.730 }, 00:16:06.730 { 00:16:06.730 "name": "BaseBdev2", 00:16:06.730 "uuid": "ae0f32db-41d3-5f6e-a0a5-3adb8c47c541", 00:16:06.730 "is_configured": true, 00:16:06.730 "data_offset": 256, 00:16:06.730 "data_size": 7936 00:16:06.730 } 00:16:06.730 ] 00:16:06.730 }' 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 98913 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98913 ']' 00:16:06.730 11:59:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98913 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.730 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98913 00:16:06.990 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.991 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.991 killing process with pid 98913 00:16:06.991 Received shutdown signal, test time was about 60.000000 seconds 00:16:06.991 00:16:06.991 Latency(us) 00:16:06.991 [2024-12-06T11:59:04.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.991 [2024-12-06T11:59:04.749Z] =================================================================================================================== 00:16:06.991 [2024-12-06T11:59:04.749Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.991 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98913' 00:16:06.991 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98913 00:16:06.991 [2024-12-06 11:59:04.501453] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.991 [2024-12-06 11:59:04.501554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.991 [2024-12-06 11:59:04.501599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.991 [2024-12-06 11:59:04.501608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:06.991 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98913 00:16:06.991 [2024-12-06 11:59:04.535326] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.252 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:07.252 00:16:07.252 real 0m15.815s 00:16:07.252 user 0m20.962s 00:16:07.252 sys 0m1.599s 00:16:07.252 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.252 ************************************ 00:16:07.252 END TEST raid_rebuild_test_sb_md_interleaved 00:16:07.252 ************************************ 00:16:07.252 11:59:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 11:59:04 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:07.252 11:59:04 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:07.252 11:59:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 98913 ']' 00:16:07.252 11:59:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 98913 00:16:07.252 11:59:04 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:07.252 00:16:07.252 real 9m48.703s 00:16:07.252 user 13m57.277s 00:16:07.252 sys 1m44.473s 00:16:07.252 ************************************ 00:16:07.252 END TEST bdev_raid 00:16:07.252 ************************************ 00:16:07.252 11:59:04 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.252 11:59:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 11:59:04 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:07.252 11:59:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:07.252 11:59:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.252 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 
************************************ 00:16:07.252 START TEST spdkcli_raid 00:16:07.252 ************************************ 00:16:07.252 11:59:04 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:07.522 * Looking for test storage... 00:16:07.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.522 11:59:05 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.522 --rc genhtml_branch_coverage=1 00:16:07.522 --rc genhtml_function_coverage=1 00:16:07.522 --rc genhtml_legend=1 00:16:07.522 --rc geninfo_all_blocks=1 00:16:07.522 --rc geninfo_unexecuted_blocks=1 00:16:07.522 00:16:07.522 ' 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.522 --rc genhtml_branch_coverage=1 00:16:07.522 --rc genhtml_function_coverage=1 00:16:07.522 --rc genhtml_legend=1 00:16:07.522 --rc geninfo_all_blocks=1 00:16:07.522 --rc geninfo_unexecuted_blocks=1 00:16:07.522 00:16:07.522 ' 00:16:07.522 
11:59:05 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.522 --rc genhtml_branch_coverage=1 00:16:07.522 --rc genhtml_function_coverage=1 00:16:07.522 --rc genhtml_legend=1 00:16:07.522 --rc geninfo_all_blocks=1 00:16:07.522 --rc geninfo_unexecuted_blocks=1 00:16:07.522 00:16:07.522 ' 00:16:07.522 11:59:05 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.522 --rc genhtml_branch_coverage=1 00:16:07.522 --rc genhtml_function_coverage=1 00:16:07.522 --rc genhtml_legend=1 00:16:07.522 --rc geninfo_all_blocks=1 00:16:07.522 --rc geninfo_unexecuted_blocks=1 00:16:07.522 00:16:07.522 ' 00:16:07.522 11:59:05 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:07.522 11:59:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:07.522 11:59:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:07.523 11:59:05 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99577 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:07.523 11:59:05 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99577 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99577 ']' 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.523 11:59:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.787 [2024-12-06 11:59:05.302517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:07.787 [2024-12-06 11:59:05.302721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99577 ] 00:16:07.787 [2024-12-06 11:59:05.449118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.787 [2024-12-06 11:59:05.497355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.787 [2024-12-06 11:59:05.497444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.356 11:59:06 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.356 11:59:06 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:08.356 11:59:06 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:08.356 11:59:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.356 11:59:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.615 11:59:06 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:08.615 11:59:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.615 11:59:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.615 11:59:06 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:08.615 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:08.615 ' 00:16:09.996 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:09.996 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:09.996 11:59:07 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:09.996 11:59:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.996 11:59:07 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.255 11:59:07 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:10.255 11:59:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.255 11:59:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.255 11:59:07 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:10.255 ' 00:16:11.191 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:11.449 11:59:08 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:11.449 11:59:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.449 11:59:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.449 11:59:09 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:11.449 11:59:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:11.449 11:59:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.449 11:59:09 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:11.449 11:59:09 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:12.016 11:59:09 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:12.016 11:59:09 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:12.016 11:59:09 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:12.016 11:59:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.016 11:59:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.016 11:59:09 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:12.016 11:59:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:12.016 11:59:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.016 11:59:09 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:12.016 ' 00:16:12.952 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:12.952 11:59:10 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:12.952 11:59:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.952 11:59:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 11:59:10 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:13.212 11:59:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.212 11:59:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 11:59:10 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:13.212 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:13.212 ' 00:16:14.589 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:14.589 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:14.589 11:59:12 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.589 11:59:12 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99577 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99577 ']' 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99577 00:16:14.589 11:59:12 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99577 00:16:14.589 killing process with pid 99577 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99577' 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99577 00:16:14.589 11:59:12 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99577 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:15.160 Process with pid 99577 is not found 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99577 ']' 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99577 00:16:15.160 11:59:12 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99577 ']' 00:16:15.160 11:59:12 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99577 00:16:15.160 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99577) - No such process 00:16:15.160 11:59:12 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99577 is not found' 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:15.160 11:59:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:15.160 00:16:15.160 real 0m7.757s 00:16:15.160 user 0m16.305s 00:16:15.160 sys 
0m1.129s 00:16:15.160 11:59:12 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.160 11:59:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.160 ************************************ 00:16:15.160 END TEST spdkcli_raid 00:16:15.160 ************************************ 00:16:15.160 11:59:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:15.160 11:59:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:15.160 11:59:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.160 11:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:15.160 ************************************ 00:16:15.160 START TEST blockdev_raid5f 00:16:15.160 ************************************ 00:16:15.160 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:15.160 * Looking for test storage... 00:16:15.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:15.160 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:15.160 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:15.160 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.432 11:59:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:15.432 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 11:59:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:15.432 11:59:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=99830 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
99830 00:16:15.432 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 99830 ']' 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.432 11:59:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.432 [2024-12-06 11:59:13.105343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:15.432 [2024-12-06 11:59:13.105570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99830 ] 00:16:15.740 [2024-12-06 11:59:13.248467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.740 [2024-12-06 11:59:13.294700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:16.328 11:59:13 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 Malloc0 00:16:16.328 Malloc1 00:16:16.328 Malloc2 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:16.328 11:59:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 11:59:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.328 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.328 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.328 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:16.328 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:16.328 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.328 11:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "95fd59b0-c23b-4f90-b546-3996c1df818f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95fd59b0-c23b-4f90-b546-3996c1df818f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "95fd59b0-c23b-4f90-b546-3996c1df818f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fa3357bc-cd57-4ad6-ae6f-cb996be970a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"344b8ba2-05ef-42c5-9916-c594e7b8b78a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "67fa823f-e32c-4b9c-8b5d-2c0d0ecdb3e4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:16.589 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 99830 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 99830 ']' 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 99830 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99830 00:16:16.589 killing process with pid 99830 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99830' 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 99830 00:16:16.589 11:59:14 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 99830 00:16:16.849 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:16.849 11:59:14 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:16.849 11:59:14 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:16.849 11:59:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.849 11:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:17.108 ************************************ 00:16:17.108 START TEST bdev_hello_world 00:16:17.108 ************************************ 00:16:17.108 11:59:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:17.108 [2024-12-06 11:59:14.686459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:17.108 [2024-12-06 11:59:14.686588] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99871 ] 00:16:17.108 [2024-12-06 11:59:14.832050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.368 [2024-12-06 11:59:14.880641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.368 [2024-12-06 11:59:15.078319] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:17.368 [2024-12-06 11:59:15.078380] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:17.368 [2024-12-06 11:59:15.078399] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:17.368 [2024-12-06 11:59:15.078735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:17.368 [2024-12-06 11:59:15.078868] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:17.368 [2024-12-06 11:59:15.078888] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:17.368 [2024-12-06 11:59:15.078954] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:16:17.368 00:16:17.368 [2024-12-06 11:59:15.078973] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:17.627 ************************************ 00:16:17.627 END TEST bdev_hello_world 00:16:17.627 ************************************ 00:16:17.627 00:16:17.627 real 0m0.724s 00:16:17.627 user 0m0.384s 00:16:17.627 sys 0m0.223s 00:16:17.627 11:59:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.627 11:59:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:17.886 11:59:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:17.886 11:59:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:17.886 11:59:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.886 11:59:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:17.886 ************************************ 00:16:17.886 START TEST bdev_bounds 00:16:17.886 ************************************ 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=99895 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 99895' 00:16:17.886 Process bdevio pid: 99895 00:16:17.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 99895 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 99895 ']' 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.886 11:59:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:17.886 [2024-12-06 11:59:15.487368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:17.886 [2024-12-06 11:59:15.487521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99895 ] 00:16:17.886 [2024-12-06 11:59:15.632002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.143 [2024-12-06 11:59:15.678418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.143 [2024-12-06 11:59:15.678500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.143 [2024-12-06 11:59:15.678569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.708 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.708 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:18.708 11:59:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:18.708 I/O targets: 00:16:18.708 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:18.708 00:16:18.708 00:16:18.708 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.708 http://cunit.sourceforge.net/ 00:16:18.708 00:16:18.708 00:16:18.708 Suite: bdevio tests on: raid5f 00:16:18.708 Test: blockdev write read block ...passed 00:16:18.708 Test: blockdev write zeroes read block ...passed 00:16:18.708 Test: blockdev write zeroes read no split ...passed 00:16:18.708 Test: blockdev write zeroes read split ...passed 00:16:18.965 Test: blockdev write zeroes read split partial ...passed 00:16:18.965 Test: blockdev reset ...passed 00:16:18.965 Test: blockdev write read 8 blocks ...passed 00:16:18.965 Test: blockdev write read size > 128k ...passed 00:16:18.965 Test: blockdev write read invalid size ...passed 00:16:18.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.965 Test: blockdev write read max offset ...passed 00:16:18.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.965 Test: blockdev writev readv 8 blocks ...passed 00:16:18.965 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.965 Test: blockdev writev readv block ...passed 00:16:18.965 Test: blockdev writev readv size > 128k ...passed 00:16:18.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.965 Test: blockdev comparev and writev ...passed 00:16:18.965 Test: blockdev nvme passthru rw ...passed 00:16:18.965 Test: blockdev nvme passthru vendor specific ...passed 00:16:18.965 Test: blockdev nvme admin passthru ...passed 00:16:18.965 Test: blockdev copy ...passed 00:16:18.965 00:16:18.965 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.965 suites 1 1 n/a 0 0 00:16:18.965 tests 23 23 23 0 0 00:16:18.965 asserts 130 130 130 0 n/a 
00:16:18.965 00:16:18.965 Elapsed time = 0.315 seconds 00:16:18.965 0 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 99895 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 99895 ']' 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 99895 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99895 00:16:18.965 killing process with pid 99895 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99895' 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 99895 00:16:18.965 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 99895 00:16:19.223 ************************************ 00:16:19.224 END TEST bdev_bounds 00:16:19.224 ************************************ 00:16:19.224 11:59:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:19.224 00:16:19.224 real 0m1.441s 00:16:19.224 user 0m3.469s 00:16:19.224 sys 0m0.320s 00:16:19.224 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.224 11:59:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:19.224 11:59:16 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:19.224 
11:59:16 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:19.224 11:59:16 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.224 11:59:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:19.224 ************************************ 00:16:19.224 START TEST bdev_nbd 00:16:19.224 ************************************ 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=99948 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 99948 /var/tmp/spdk-nbd.sock 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 99948 ']' 00:16:19.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.224 11:59:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:19.484 [2024-12-06 11:59:17.011574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:19.484 [2024-12-06 11:59:17.011783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.484 [2024-12-06 11:59:17.152133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.484 [2024-12-06 11:59:17.196341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:20.425 11:59:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.425 1+0 records in 00:16:20.425 1+0 records out 00:16:20.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474223 s, 8.6 MB/s 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:20.425 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:20.426 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:20.686 { 00:16:20.686 "nbd_device": "/dev/nbd0", 00:16:20.686 "bdev_name": "raid5f" 00:16:20.686 } 00:16:20.686 ]' 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:20.686 { 00:16:20.686 "nbd_device": "/dev/nbd0", 00:16:20.686 "bdev_name": "raid5f" 00:16:20.686 } 00:16:20.686 ]' 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.686 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:20.945 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.946 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.206 11:59:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:21.466 /dev/nbd0 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.466 11:59:19 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.466 1+0 records in 00:16:21.466 1+0 records out 00:16:21.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366169 s, 11.2 MB/s 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.466 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:21.727 { 00:16:21.727 "nbd_device": "/dev/nbd0", 00:16:21.727 "bdev_name": "raid5f" 00:16:21.727 } 00:16:21.727 ]' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:21.727 { 00:16:21.727 "nbd_device": "/dev/nbd0", 00:16:21.727 "bdev_name": "raid5f" 00:16:21.727 } 00:16:21.727 ]' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:21.727 256+0 records in 00:16:21.727 256+0 records out 00:16:21.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442427 s, 237 MB/s 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:21.727 256+0 records in 00:16:21.727 256+0 records out 00:16:21.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274809 s, 38.2 MB/s 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.727 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.988 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:22.248 11:59:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:22.248 malloc_lvol_verify 00:16:22.248 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:22.509 56181c94-9c23-4ea4-9e64-e69c5b016003 00:16:22.509 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:22.769 119b68cd-bb59-44ec-b940-8f3828f5f3c6 00:16:22.769 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:23.029 /dev/nbd0 00:16:23.029 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:23.029 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:23.029 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:23.029 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:23.030 mke2fs 1.47.0 (5-Feb-2023) 00:16:23.030 Discarding device blocks: 0/4096 done 00:16:23.030 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:23.030 00:16:23.030 Allocating group tables: 0/1 done 00:16:23.030 Writing inode tables: 0/1 done 00:16:23.030 Creating journal (1024 blocks): done 00:16:23.030 Writing superblocks and filesystem accounting information: 0/1 done 00:16:23.030 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.030 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 99948 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 99948 ']' 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 99948 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99948 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:23.290 killing process with pid 99948 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99948' 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 99948 00:16:23.290 11:59:20 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 99948 00:16:23.551 11:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:23.551 00:16:23.551 real 0m4.272s 00:16:23.551 user 0m6.223s 00:16:23.551 sys 0m1.191s 00:16:23.551 11:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.551 11:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:23.551 ************************************ 00:16:23.551 END TEST bdev_nbd 00:16:23.551 ************************************ 00:16:23.551 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:23.551 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:23.551 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:23.551 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:23.551 11:59:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:23.551 11:59:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.551 11:59:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.551 ************************************ 00:16:23.551 START TEST bdev_fio 00:16:23.551 ************************************ 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:23.551 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:23.551 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:23.812 ************************************ 00:16:23.812 START TEST bdev_fio_rw_verify 00:16:23.812 ************************************ 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.812 11:59:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:24.073 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:24.073 fio-3.35 00:16:24.073 Starting 1 thread 00:16:36.294 00:16:36.294 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100132: Fri Dec 6 11:59:32 2024 00:16:36.294 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(486MiB/10001msec) 00:16:36.294 slat (nsec): min=16725, max=71868, avg=18591.83, stdev=2081.63 00:16:36.294 clat (usec): min=11, max=380, avg=128.49, stdev=44.17 00:16:36.294 lat (usec): min=29, max=399, avg=147.08, stdev=44.39 00:16:36.294 clat percentiles (usec): 00:16:36.294 | 50.000th=[ 131], 99.000th=[ 212], 99.900th=[ 233], 99.990th=[ 265], 00:16:36.294 | 99.999th=[ 371] 00:16:36.294 write: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(504MiB/9877msec); 0 zone resets 00:16:36.294 slat (usec): min=7, max=360, avg=16.28, stdev= 4.00 00:16:36.294 clat (usec): min=56, max=1460, avg=296.11, stdev=43.03 00:16:36.294 lat (usec): min=71, max=1711, avg=312.39, stdev=44.20 00:16:36.294 clat percentiles (usec): 00:16:36.294 | 50.000th=[ 297], 99.000th=[ 375], 99.900th=[ 627], 99.990th=[ 1254], 00:16:36.294 | 99.999th=[ 1385] 00:16:36.294 bw ( KiB/s): min=48152, max=54792, per=98.62%, avg=51582.74, stdev=1559.61, samples=19 00:16:36.294 iops : min=12038, max=13698, avg=12895.68, stdev=389.90, samples=19 00:16:36.294 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.81%, 250=40.23%, 500=43.88% 00:16:36.294 lat (usec) : 750=0.04%, 1000=0.02% 00:16:36.294 lat (msec) : 2=0.01% 00:16:36.294 cpu : usr=98.78%, sys=0.55%, ctx=22, majf=0, minf=13286 00:16:36.294 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.294 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.295 issued rwts: total=124427,129146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.295 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:36.295 00:16:36.295 Run status group 0 (all jobs): 00:16:36.295 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=486MiB (510MB), run=10001-10001msec 00:16:36.295 WRITE: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=504MiB (529MB), run=9877-9877msec 00:16:36.295 ----------------------------------------------------- 00:16:36.295 Suppressions used: 00:16:36.295 count bytes template 00:16:36.295 1 7 /usr/src/fio/parse.c 00:16:36.295 845 81120 /usr/src/fio/iolog.c 00:16:36.295 1 8 libtcmalloc_minimal.so 00:16:36.295 1 904 libcrypto.so 00:16:36.295 ----------------------------------------------------- 00:16:36.295 00:16:36.295 00:16:36.295 real 0m11.210s 00:16:36.295 user 0m11.613s 00:16:36.295 sys 0m0.664s 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.295 ************************************ 00:16:36.295 END TEST bdev_fio_rw_verify 00:16:36.295 ************************************ 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "95fd59b0-c23b-4f90-b546-3996c1df818f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95fd59b0-c23b-4f90-b546-3996c1df818f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "95fd59b0-c23b-4f90-b546-3996c1df818f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fa3357bc-cd57-4ad6-ae6f-cb996be970a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "344b8ba2-05ef-42c5-9916-c594e7b8b78a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "67fa823f-e32c-4b9c-8b5d-2c0d0ecdb3e4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:36.295 /home/vagrant/spdk_repo/spdk 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:36.295 00:16:36.295 real 0m11.503s 00:16:36.295 user 0m11.748s 00:16:36.295 sys 0m0.794s 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.295 11:59:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:36.295 ************************************ 00:16:36.295 END TEST bdev_fio 00:16:36.295 ************************************ 00:16:36.295 11:59:32 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:36.295 11:59:32 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:36.295 11:59:32 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:36.295 11:59:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.295 11:59:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:36.295 ************************************ 00:16:36.295 START TEST bdev_verify 00:16:36.295 ************************************ 00:16:36.295 11:59:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:36.295 [2024-12-06 11:59:32.931030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:36.295 [2024-12-06 11:59:32.931149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100290 ] 00:16:36.295 [2024-12-06 11:59:33.077154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:36.295 [2024-12-06 11:59:33.126802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.295 [2024-12-06 11:59:33.126886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.295 Running I/O for 5 seconds... 00:16:37.806 11167.00 IOPS, 43.62 MiB/s [2024-12-06T11:59:36.501Z] 11181.00 IOPS, 43.68 MiB/s [2024-12-06T11:59:37.441Z] 11220.67 IOPS, 43.83 MiB/s [2024-12-06T11:59:38.380Z] 11238.50 IOPS, 43.90 MiB/s 00:16:40.622 Latency(us) 00:16:40.622 [2024-12-06T11:59:38.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.622 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.622 Verification LBA range: start 0x0 length 0x2000 00:16:40.622 raid5f : 5.02 4462.37 17.43 0.00 0.00 43103.29 271.87 30907.81 00:16:40.622 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.622 Verification LBA range: start 0x2000 length 0x2000 00:16:40.622 raid5f : 5.01 6746.58 26.35 0.00 0.00 28487.63 354.15 20719.68 00:16:40.622 [2024-12-06T11:59:38.380Z] =================================================================================================================== 00:16:40.622 [2024-12-06T11:59:38.380Z] Total : 11208.96 43.78 0.00 0.00 34311.05 271.87 30907.81 00:16:40.882 00:16:40.882 real 0m5.753s 00:16:40.882 user 0m10.692s 00:16:40.882 sys 0m0.245s 00:16:40.882 11:59:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.882 11:59:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # 
set +x 00:16:40.882 ************************************ 00:16:40.882 END TEST bdev_verify 00:16:40.882 ************************************ 00:16:41.142 11:59:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:41.142 11:59:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:41.142 11:59:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.142 11:59:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:41.142 ************************************ 00:16:41.142 START TEST bdev_verify_big_io 00:16:41.142 ************************************ 00:16:41.142 11:59:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:41.142 [2024-12-06 11:59:38.758929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:41.142 [2024-12-06 11:59:38.759064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100372 ] 00:16:41.402 [2024-12-06 11:59:38.904786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.402 [2024-12-06 11:59:38.956701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.402 [2024-12-06 11:59:38.956813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.661 Running I/O for 5 seconds... 
00:16:43.981 695.00 IOPS, 43.44 MiB/s [2024-12-06T11:59:42.677Z] 792.00 IOPS, 49.50 MiB/s [2024-12-06T11:59:43.617Z] 824.33 IOPS, 51.52 MiB/s [2024-12-06T11:59:44.557Z] 825.00 IOPS, 51.56 MiB/s 00:16:46.799 Latency(us) 00:16:46.799 [2024-12-06T11:59:44.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.799 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:46.799 Verification LBA range: start 0x0 length 0x200 00:16:46.799 raid5f : 5.29 359.80 22.49 0.00 0.00 8788456.02 217.32 373641.06 00:16:46.799 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:46.799 Verification LBA range: start 0x200 length 0x200 00:16:46.799 raid5f : 5.15 468.73 29.30 0.00 0.00 6833173.58 248.62 293051.81 00:16:46.799 [2024-12-06T11:59:44.557Z] =================================================================================================================== 00:16:46.799 [2024-12-06T11:59:44.557Z] Total : 828.53 51.78 0.00 0.00 7695291.86 217.32 373641.06 00:16:47.059 00:16:47.059 real 0m6.030s 00:16:47.059 user 0m11.229s 00:16:47.059 sys 0m0.250s 00:16:47.060 11:59:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.060 11:59:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.060 ************************************ 00:16:47.060 END TEST bdev_verify_big_io 00:16:47.060 ************************************ 00:16:47.060 11:59:44 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:47.060 11:59:44 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:47.060 11:59:44 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.060 11:59:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 
00:16:47.060 ************************************ 00:16:47.060 START TEST bdev_write_zeroes 00:16:47.060 ************************************ 00:16:47.060 11:59:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:47.320 [2024-12-06 11:59:44.866069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:47.320 [2024-12-06 11:59:44.866202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100454 ] 00:16:47.320 [2024-12-06 11:59:45.011955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.320 [2024-12-06 11:59:45.065576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.585 Running I/O for 1 seconds... 
00:16:48.527 30303.00 IOPS, 118.37 MiB/s 00:16:48.527 Latency(us) 00:16:48.527 [2024-12-06T11:59:46.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.527 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:48.527 raid5f : 1.01 30256.63 118.19 0.00 0.00 4221.52 1366.53 6067.09 00:16:48.527 [2024-12-06T11:59:46.285Z] =================================================================================================================== 00:16:48.527 [2024-12-06T11:59:46.285Z] Total : 30256.63 118.19 0.00 0.00 4221.52 1366.53 6067.09 00:16:48.787 00:16:48.787 real 0m1.738s 00:16:48.787 user 0m1.398s 00:16:48.787 sys 0m0.219s 00:16:48.787 11:59:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.787 11:59:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:48.787 ************************************ 00:16:48.787 END TEST bdev_write_zeroes 00:16:48.787 ************************************ 00:16:49.047 11:59:46 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:49.047 11:59:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:49.047 11:59:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.047 11:59:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.047 ************************************ 00:16:49.047 START TEST bdev_json_nonenclosed 00:16:49.047 ************************************ 00:16:49.047 11:59:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:49.047 [2024-12-06 
11:59:46.676107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:49.047 [2024-12-06 11:59:46.676257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100496 ] 00:16:49.311 [2024-12-06 11:59:46.819090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.311 [2024-12-06 11:59:46.874228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.311 [2024-12-06 11:59:46.874367] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:49.311 [2024-12-06 11:59:46.874389] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:49.311 [2024-12-06 11:59:46.874411] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:49.311 00:16:49.311 real 0m0.396s 00:16:49.311 user 0m0.176s 00:16:49.311 sys 0m0.116s 00:16:49.311 11:59:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.311 11:59:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:49.311 ************************************ 00:16:49.311 END TEST bdev_json_nonenclosed 00:16:49.311 ************************************ 00:16:49.311 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:49.311 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:49.311 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.311 11:59:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.311 
************************************ 00:16:49.311 START TEST bdev_json_nonarray 00:16:49.311 ************************************ 00:16:49.311 11:59:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:49.572 [2024-12-06 11:59:47.153113] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:49.572 [2024-12-06 11:59:47.153253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100516 ] 00:16:49.572 [2024-12-06 11:59:47.300535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.832 [2024-12-06 11:59:47.354419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.832 [2024-12-06 11:59:47.354565] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:49.832 [2024-12-06 11:59:47.354588] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:49.832 [2024-12-06 11:59:47.354604] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:49.832 00:16:49.832 real 0m0.407s 00:16:49.832 user 0m0.172s 00:16:49.832 sys 0m0.130s 00:16:49.832 11:59:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.832 11:59:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 ************************************ 00:16:49.832 END TEST bdev_json_nonarray 00:16:49.832 ************************************ 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:49.832 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:49.832 00:16:49.832 real 0m34.772s 00:16:49.832 user 0m47.442s 00:16:49.832 sys 0m4.524s 00:16:49.832 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.832 11:59:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 
************************************ 00:16:49.832 END TEST blockdev_raid5f 00:16:49.832 ************************************ 00:16:50.093 11:59:47 -- spdk/autotest.sh@194 -- # uname -s 00:16:50.093 11:59:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:50.093 11:59:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.093 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:50.093 11:59:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:50.093 11:59:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:50.093 11:59:47 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:16:50.093 11:59:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:50.093 11:59:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:50.093 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:50.093 11:59:47 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:50.093 11:59:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:50.093 11:59:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:50.093 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:16:52.636 INFO: APP EXITING 00:16:52.636 INFO: killing all VMs 00:16:52.636 INFO: killing vhost app 00:16:52.636 INFO: EXIT DONE 00:16:52.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.897 Waiting for block devices as requested 00:16:52.897 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.897 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.838 Cleaning 00:16:53.838 Removing: /var/run/dpdk/spdk0/config 00:16:53.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:53.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:53.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:53.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:53.838 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:53.838 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:53.838 Removing: /dev/shm/spdk_tgt_trace.pid68815 00:16:53.838 Removing: /var/run/dpdk/spdk0 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100121 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100290 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100372 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100454 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100496 00:16:53.838 Removing: /var/run/dpdk/spdk_pid100516 00:16:53.838 Removing: 
/var/run/dpdk/spdk_pid68651 00:16:54.098 Removing: /var/run/dpdk/spdk_pid68815 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69022 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69108 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69138 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69244 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69262 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69450 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69529 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69614 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69703 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69789 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69829 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69865 00:16:54.098 Removing: /var/run/dpdk/spdk_pid69930 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70036 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70461 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70514 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70561 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70577 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70640 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70651 00:16:54.098 Removing: /var/run/dpdk/spdk_pid70720 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70738 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70785 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70798 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70845 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70858 00:16:54.099 Removing: /var/run/dpdk/spdk_pid70996 00:16:54.099 Removing: /var/run/dpdk/spdk_pid71027 00:16:54.099 Removing: /var/run/dpdk/spdk_pid71116 00:16:54.099 Removing: /var/run/dpdk/spdk_pid72287 00:16:54.099 Removing: /var/run/dpdk/spdk_pid72482 00:16:54.099 Removing: /var/run/dpdk/spdk_pid72611 00:16:54.099 Removing: /var/run/dpdk/spdk_pid73221 00:16:54.099 Removing: /var/run/dpdk/spdk_pid73416 00:16:54.099 Removing: /var/run/dpdk/spdk_pid73545 00:16:54.099 Removing: /var/run/dpdk/spdk_pid74150 00:16:54.099 Removing: /var/run/dpdk/spdk_pid74469 00:16:54.099 Removing: 
/var/run/dpdk/spdk_pid74598 00:16:54.099 Removing: /var/run/dpdk/spdk_pid75928 00:16:54.099 Removing: /var/run/dpdk/spdk_pid76170 00:16:54.099 Removing: /var/run/dpdk/spdk_pid76299 00:16:54.099 Removing: /var/run/dpdk/spdk_pid77629 00:16:54.099 Removing: /var/run/dpdk/spdk_pid77871 00:16:54.099 Removing: /var/run/dpdk/spdk_pid78000 00:16:54.099 Removing: /var/run/dpdk/spdk_pid79341 00:16:54.099 Removing: /var/run/dpdk/spdk_pid79770 00:16:54.099 Removing: /var/run/dpdk/spdk_pid79899 00:16:54.099 Removing: /var/run/dpdk/spdk_pid81328 00:16:54.099 Removing: /var/run/dpdk/spdk_pid81572 00:16:54.099 Removing: /var/run/dpdk/spdk_pid81706 00:16:54.099 Removing: /var/run/dpdk/spdk_pid83125 00:16:54.099 Removing: /var/run/dpdk/spdk_pid83373 00:16:54.359 Removing: /var/run/dpdk/spdk_pid83508 00:16:54.359 Removing: /var/run/dpdk/spdk_pid84927 00:16:54.359 Removing: /var/run/dpdk/spdk_pid85403 00:16:54.359 Removing: /var/run/dpdk/spdk_pid85532 00:16:54.359 Removing: /var/run/dpdk/spdk_pid85665 00:16:54.359 Removing: /var/run/dpdk/spdk_pid86065 00:16:54.359 Removing: /var/run/dpdk/spdk_pid86766 00:16:54.359 Removing: /var/run/dpdk/spdk_pid87133 00:16:54.359 Removing: /var/run/dpdk/spdk_pid87799 00:16:54.359 Removing: /var/run/dpdk/spdk_pid88223 00:16:54.359 Removing: /var/run/dpdk/spdk_pid88953 00:16:54.359 Removing: /var/run/dpdk/spdk_pid89343 00:16:54.359 Removing: /var/run/dpdk/spdk_pid91257 00:16:54.359 Removing: /var/run/dpdk/spdk_pid91684 00:16:54.359 Removing: /var/run/dpdk/spdk_pid92107 00:16:54.359 Removing: /var/run/dpdk/spdk_pid94141 00:16:54.359 Removing: /var/run/dpdk/spdk_pid94611 00:16:54.359 Removing: /var/run/dpdk/spdk_pid95111 00:16:54.359 Removing: /var/run/dpdk/spdk_pid96143 00:16:54.359 Removing: /var/run/dpdk/spdk_pid96456 00:16:54.359 Removing: /var/run/dpdk/spdk_pid97366 00:16:54.359 Removing: /var/run/dpdk/spdk_pid97678 00:16:54.359 Removing: /var/run/dpdk/spdk_pid98595 00:16:54.359 Removing: /var/run/dpdk/spdk_pid98913 00:16:54.359 Removing: 
/var/run/dpdk/spdk_pid99577 00:16:54.359 Removing: /var/run/dpdk/spdk_pid99830 00:16:54.359 Removing: /var/run/dpdk/spdk_pid99871 00:16:54.359 Removing: /var/run/dpdk/spdk_pid99895 00:16:54.359 Clean 00:16:54.359 11:59:52 -- common/autotest_common.sh@1451 -- # return 0 00:16:54.359 11:59:52 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:16:54.359 11:59:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.359 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.359 11:59:52 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:16:54.359 11:59:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.359 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 11:59:52 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:54.620 11:59:52 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:54.620 11:59:52 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:54.620 11:59:52 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:16:54.620 11:59:52 -- spdk/autotest.sh@394 -- # hostname 00:16:54.620 11:59:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:54.620 geninfo: WARNING: invalid characters removed from testname! 
00:17:16.608 12:00:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:19.141 12:00:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:21.062 12:00:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:22.969 12:00:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:25.511 12:00:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:27.422 12:00:24 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:29.327 12:00:27 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:29.589 12:00:27 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:17:29.589 12:00:27 -- common/autotest_common.sh@1681 -- $ lcov --version 00:17:29.589 12:00:27 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:17:29.589 12:00:27 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:17:29.589 12:00:27 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:17:29.589 12:00:27 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:17:29.589 12:00:27 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:17:29.589 12:00:27 -- scripts/common.sh@336 -- $ IFS=.-: 00:17:29.589 12:00:27 -- scripts/common.sh@336 -- $ read -ra ver1 00:17:29.589 12:00:27 -- scripts/common.sh@337 -- $ IFS=.-: 00:17:29.589 12:00:27 -- scripts/common.sh@337 -- $ read -ra ver2 00:17:29.589 12:00:27 -- scripts/common.sh@338 -- $ local 'op=<' 00:17:29.589 12:00:27 -- scripts/common.sh@340 -- $ ver1_l=2 00:17:29.589 12:00:27 -- scripts/common.sh@341 -- $ ver2_l=1 00:17:29.589 12:00:27 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:17:29.589 12:00:27 -- scripts/common.sh@344 -- $ case "$op" in 00:17:29.589 12:00:27 -- scripts/common.sh@345 -- $ : 1 00:17:29.589 12:00:27 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:17:29.589 12:00:27 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.589 12:00:27 -- scripts/common.sh@365 -- $ decimal 1 00:17:29.589 12:00:27 -- scripts/common.sh@353 -- $ local d=1 00:17:29.589 12:00:27 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:17:29.589 12:00:27 -- scripts/common.sh@355 -- $ echo 1 00:17:29.589 12:00:27 -- scripts/common.sh@365 -- $ ver1[v]=1 00:17:29.589 12:00:27 -- scripts/common.sh@366 -- $ decimal 2 00:17:29.589 12:00:27 -- scripts/common.sh@353 -- $ local d=2 00:17:29.589 12:00:27 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:17:29.589 12:00:27 -- scripts/common.sh@355 -- $ echo 2 00:17:29.589 12:00:27 -- scripts/common.sh@366 -- $ ver2[v]=2 00:17:29.589 12:00:27 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:17:29.589 12:00:27 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:17:29.589 12:00:27 -- scripts/common.sh@368 -- $ return 0 00:17:29.589 12:00:27 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.589 12:00:27 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:17:29.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.589 --rc genhtml_branch_coverage=1 00:17:29.589 --rc genhtml_function_coverage=1 00:17:29.589 --rc genhtml_legend=1 00:17:29.589 --rc geninfo_all_blocks=1 00:17:29.589 --rc geninfo_unexecuted_blocks=1 00:17:29.589 00:17:29.589 ' 00:17:29.589 12:00:27 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:17:29.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.589 --rc genhtml_branch_coverage=1 00:17:29.589 --rc genhtml_function_coverage=1 00:17:29.589 --rc genhtml_legend=1 00:17:29.589 --rc geninfo_all_blocks=1 00:17:29.589 --rc geninfo_unexecuted_blocks=1 00:17:29.589 00:17:29.589 ' 00:17:29.589 12:00:27 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:17:29.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.589 --rc genhtml_branch_coverage=1 00:17:29.589 --rc 
genhtml_function_coverage=1 00:17:29.589 --rc genhtml_legend=1 00:17:29.589 --rc geninfo_all_blocks=1 00:17:29.589 --rc geninfo_unexecuted_blocks=1 00:17:29.589 00:17:29.589 ' 00:17:29.589 12:00:27 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:17:29.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.589 --rc genhtml_branch_coverage=1 00:17:29.589 --rc genhtml_function_coverage=1 00:17:29.589 --rc genhtml_legend=1 00:17:29.589 --rc geninfo_all_blocks=1 00:17:29.589 --rc geninfo_unexecuted_blocks=1 00:17:29.589 00:17:29.589 ' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.589 12:00:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:29.589 12:00:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:29.589 12:00:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.589 12:00:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.589 12:00:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.589 12:00:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.589 12:00:27 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.589 12:00:27 -- paths/export.sh@5 -- $ export PATH 00:17:29.589 12:00:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.589 12:00:27 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:29.589 12:00:27 -- common/autobuild_common.sh@479 -- $ date +%s 00:17:29.589 12:00:27 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733486427.XXXXXX 00:17:29.589 12:00:27 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733486427.LuadUF 00:17:29.589 12:00:27 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:17:29.589 12:00:27 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:29.589 12:00:27 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude 
/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@495 -- $ get_config_params 00:17:29.589 12:00:27 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:17:29.589 12:00:27 -- common/autotest_common.sh@10 -- $ set +x 00:17:29.589 12:00:27 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:17:29.589 12:00:27 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:17:29.589 12:00:27 -- pm/common@17 -- $ local monitor 00:17:29.589 12:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:29.589 12:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:29.589 12:00:27 -- pm/common@25 -- $ sleep 1 00:17:29.589 12:00:27 -- pm/common@21 -- $ date +%s 00:17:29.589 12:00:27 -- pm/common@21 -- $ date +%s 00:17:29.589 12:00:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733486427 00:17:29.589 12:00:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733486427 00:17:29.850 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733486427_collect-cpu-load.pm.log 00:17:29.850 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733486427_collect-vmstat.pm.log 00:17:30.791 12:00:28 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:17:30.791 12:00:28 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:17:30.791 12:00:28 -- spdk/autopackage.sh@14 -- $ timing_finish 00:17:30.791 12:00:28 -- common/autotest_common.sh@736 -- 
$ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:30.791 12:00:28 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:30.791 12:00:28 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:30.791 12:00:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:17:30.791 12:00:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:30.791 12:00:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:30.791 12:00:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:30.791 12:00:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:30.791 12:00:28 -- pm/common@44 -- $ pid=102027 00:17:30.791 12:00:28 -- pm/common@50 -- $ kill -TERM 102027 00:17:30.791 12:00:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:30.791 12:00:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:30.791 12:00:28 -- pm/common@44 -- $ pid=102029 00:17:30.791 12:00:28 -- pm/common@50 -- $ kill -TERM 102029 00:17:30.791 + [[ -n 6159 ]] 00:17:30.791 + sudo kill 6159 00:17:30.800 [Pipeline] } 00:17:30.813 [Pipeline] // timeout 00:17:30.817 [Pipeline] } 00:17:30.829 [Pipeline] // stage 00:17:30.833 [Pipeline] } 00:17:30.846 [Pipeline] // catchError 00:17:30.853 [Pipeline] stage 00:17:30.855 [Pipeline] { (Stop VM) 00:17:30.865 [Pipeline] sh 00:17:31.144 + vagrant halt 00:17:33.683 ==> default: Halting domain... 00:17:41.828 [Pipeline] sh 00:17:42.111 + vagrant destroy -f 00:17:44.652 ==> default: Removing domain... 
00:17:44.666 [Pipeline] sh 00:17:44.951 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:17:44.961 [Pipeline] } 00:17:44.976 [Pipeline] // stage 00:17:44.982 [Pipeline] } 00:17:44.996 [Pipeline] // dir 00:17:45.002 [Pipeline] } 00:17:45.017 [Pipeline] // wrap 00:17:45.024 [Pipeline] } 00:17:45.037 [Pipeline] // catchError 00:17:45.047 [Pipeline] stage 00:17:45.049 [Pipeline] { (Epilogue) 00:17:45.063 [Pipeline] sh 00:17:45.350 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:49.582 [Pipeline] catchError 00:17:49.584 [Pipeline] { 00:17:49.596 [Pipeline] sh 00:17:49.881 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:50.141 Artifacts sizes are good 00:17:50.152 [Pipeline] } 00:17:50.167 [Pipeline] // catchError 00:17:50.180 [Pipeline] archiveArtifacts 00:17:50.188 Archiving artifacts 00:17:50.285 [Pipeline] cleanWs 00:17:50.306 [WS-CLEANUP] Deleting project workspace... 00:17:50.306 [WS-CLEANUP] Deferred wipeout is used... 00:17:50.321 [WS-CLEANUP] done 00:17:50.324 [Pipeline] } 00:17:50.341 [Pipeline] // stage 00:17:50.346 [Pipeline] } 00:17:50.361 [Pipeline] // node 00:17:50.366 [Pipeline] End of Pipeline 00:17:50.407 Finished: SUCCESS